Is Geoengineering Research Objectionable?

Geoengineering is the idea of putting materials–say, certain aerosols–into the atmosphere to counteract the effects of carbon and other greenhouse gases. I’ve written about the technology and arguments a few times: for examples, see here and here.

But when small-scale experiments with this approach are proposed, like a recent effort in Sweden, there are often strong objections of the “slippery slope” variety: that is, there’s probably nothing especially dangerous or wrong with this particular limited experiment, but looking ahead, future larger-scale experiments may pose larger risks. Also, if this research leads many people to think that a cheap and easy techno-fix for climate change is likely a few years down the road, they will be less likely to support near-term efforts to reduce carbon emissions. Daniel Bodansky and Andy Parker address these arguments in “Research on Solar Climate Intervention Is the Best Defense Against Moral Hazard” (Issues in Science and Technology, Summer 2021).

Their essay suggests that the case against research experiments in geoengineering is shaky on several grounds:

  1. What if the results of the experiment suggest that geoengineering is not a plausible or workable idea? Bodansky and Parker point to the example of a previous set of experiments on “ocean iron fertilization”: the idea, proposed back in 1988, was that fertilizing the ocean with iron would create large algae blooms that would draw carbon dioxide from the atmosphere and then carry that carbon to the bottom of the ocean as the algae died. One early researcher in the area reportedly joked, “Give me half a tanker of iron and I’ll give you another ice age.” There were a dozen small-scale field experiments (here’s a review from 2012). But the experiments suggested the approach would not be very effective and might have negative side effects. So among experts who favor a broad array of efforts to reduce carbon emissions, this particular idea is not considered relevant. To put it another way, an unresearched idea will always have a certain attraction, especially in an emergency situation. A researched idea can easily seem less attractive.
  2. When people are confronted with a discussion of geoengineering, they often become more willing to consider the other responses. Bodansky and Parker write:

[A] team at Yale University sought to test directly the moral hazard argument by assigning study participants in the United Kingdom and United States to two groups: one group was given information about climate intervention as a response to global warming; the other was given information about regulating pollution. The study’s results were remarkable. The researchers found that the group exposed to information about climate intervention was slightly more concerned about climate change risks. That is, they found evidence of a reverse moral hazard response. This research might be dismissed as an academic curiosity, but the same reverse moral hazard effect has been observed using different study methods in Germany, Sweden, the United States, and the United Kingdom.

It’s easy to imagine an underlying dynamic here. Imagine someone who is a little skeptical about the science behind climate change. When that person is confronted with a discussion of climate intervention, they start thinking, “Gee, if altering the atmosphere is under consideration, then non-carbon energy subsidies, energy efficiency efforts, and a carbon tax don’t sound so bad.”

  3. Although Bodansky and Parker don’t emphasize this theme, there are people who have been arguing for some years now that the world is soon about to pass a threshold where, because of carbon and other greenhouse gases in the air, the risks of climate change become irreversibly high. For the sake of argument, let’s say that those claims and predictions aren’t just posturing and exaggeration in an attempt to stir up a more aggressive policy response, but are literally true. In other words, say that the world reaches a point (or has already reached a point?) where the clean energy/fuel efficiency/carbon tax agenda for reducing the risks of climate change is already too late, and something else needs to be done. In that situation, knowing more about when, where, and how climate intervention efforts might be done, in a way that promises the most benefit for the smallest risk, might be pretty important.

Finally, I’ll just add that if those concerned about climate change want to use a slogan of “follow the science”–which I think is perhaps their strongest argument–then it’s a bad look to start arguing that certain kinds of science shouldn’t be followed.

Wealth and Income Inequality: Only a Weak Correlation?

Income refers to what is received in a certain period of time, which is why we refer to “annual income” or “weekly paycheck.” Wealth refers to the total that has been accumulated over time, which is why we refer to a “retirement account.” It feels as if there should be an intuitive connection between them: for example, greater inequality of incomes should be accompanied by greater inequality of wealth. This connection is true for the US economy. But across economies of western Europe, the correlation between income and wealth inequality is close to nonexistent. Fabian T. Pfeffer and Nora Waitkus present the data in “The Wealth Inequality of Nations” (American Sociological Review, 2021, forthcoming).

Pfeffer and Waitkus use data from the Luxembourg Income Study, which goes to considerable lengths to measure national income and wealth in a way that is comparable across countries. They look at data for the US, Canada, and a group of mostly European countries: Austria, Australia, Finland, Germany, Greece, Italy, Luxembourg, Norway, Slovakia, Slovenia, Spain, Sweden, and the United Kingdom. They measure inequality in each country in two ways: with the Gini coefficient, a standard metric that expresses inequality on a scale from 0 to 1, and by the share of income or wealth held by the top 5% of the population. Here is one of their figures:

As you see, the US economy is in the upper right corner of both figures: that is, the US is high on inequality of both income and wealth. But what’s perhaps surprising is that if you leave the US out of the comparison, there doesn’t seem to be any correlation between inequality of income and wealth. The dashed line shows the correlation between the two types of inequality with US data included; the solid line shows the lack of a correlation (that is, a flatter line) with the US data excluded.

For example, consider the top panel. Sweden and Norway are not far behind the US in wealth inequality (horizontal axis), but rank low in income inequality (vertical axis). Conversely, Italy and Spain are not far behind the US in income inequality, but much lower in wealth inequality.

The authors summarize: “[I]nternational differences in income inequality tell us close to nothing about international differences in wealth inequality. In fact, many countries that we customarily describe as comparatively egalitarian using income-based comparisons (e.g., Scandinavian countries) can be classified as anything but in terms of their levels of wealth inequality. Many countries that are similarly unequal in terms of income (e.g., Germany and Greece) differ greatly in terms of their level of wealth inequality (with Germany displaying very high levels).”

What lies behind this difference? The authors emphasize that they don’t have a full answer, but as guidance to an answer they break down wealth into categories like housing wealth, financial assets, non-housing real assets (which can include personally-owned business assets), debt, and so on. The breakdown suggests that inequality of housing assets plays a crucial role:

[T]he overall distribution of housing equity, of which the prevalence of homeownership is just one aspect, is the central element accounting for overall wealth inequality. A country’s distribution of housing equity explains its overall level of wealth inequality and concentration to a substantial degree, including both the outlying position of the United States and the overall variation across many different countries. This is not to say the strong concentration of financial assets and business equity at the top of the wealth distribution in most countries is unimportant. In fact, a focus on financial assets and business equity is likely central to understanding elite closure and the accelerating wealth accumulation of the top 1 percent …

Cross-national differences in income inequality do not predict cross-national differences in wealth inequality, because the latter are most centrally driven by housing equity. In turn, the distribution of housing equity, we argue, is crucially determined by financialization and housing market dynamics, that is, in institutional spheres outside the labor market and the classical realms of the welfare state.

Here’s a final caveat about this kind of work. The authors write: “Our analysis, in line with most other wealth research, applies a definition of net worth that does not include public pensions nor most other forms of employer-provided pensions.” Thus, if I work at a job where the employer promises a pension, that promise isn’t part of my “wealth.” However, if I work at a job where my employer helps contribute to a retirement account that will pay for my living expenses after retirement, then that retirement account is part of my “wealth.” There are reasonable grounds for this distinction–for example, my personal account can be left as part of an inheritance to the next generation–but it’s still worth keeping in mind. After all, if employer-promised pensions should be considered part of a person’s wealth, then what about government-promised old-age pensions, like Social Security in the United States?

Rethinking the Great Housing Bust of 2007-8

There’s a standard story about the underlying causes of the Great Recession of 2007-9. I’m sure I’ve told it myself a time or two. It starts with excessively easy lending for home mortgages–including the so-called “sub-prime” loans–aided and abetted by financial engineering that repackaged these loans as “collateralized debt obligations” and sold them to investors, including banks. The surge of mortgage lending helps to stimulate unrealistically large annual increases in home prices. But reality eventually catches up: home prices drop from their unrealistic highs, borrowers are unable to repay their mortgages, the financial sector seizes up, and recession is unleashed.

This story isn’t exactly wrong, but the path of housing prices since 2008 suggests that it isn’t completely right, either. Because remember those home prices that were “unrealistically” high in 2006 and then fell back to earth in 2007-8? Those housing prices have since rebounded back to where they were before the Great Recession.

The figure shows several indexes of housing prices. I will just say briefly (before the light of interest in the eyes of readers begins to fade) that there’s an interesting problem in how to measure home prices over time. If one just looks at prices of recent sales, for example, the index will tend to overweight newer, more expensive homes. Ideally, you want to look at the price of homes of equivalent quality, which is done by using a “repeat sales” method–that is, looking at the price of a given home that is sold (or whose mortgage is refinanced) more than once, so that you can calculate the rise in the price of homes of unchanged quality over time. The housing price index then combines the data from these repeat sales of individual homes. Both the “All Transactions” indexes below from the Federal Housing Finance Agency and the Case-Shiller Index now published by S&P CoreLogic use versions of the repeat-sales approach, with some differences in sample size and aggregation method.
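To make the repeat-sales idea concrete, here is a toy sketch with hypothetical sale pairs. It uses a simple geometric mean of price relatives between two years; the actual FHFA and Case-Shiller methods are far more elaborate, regressing log price relatives on time dummies across many overlapping periods.

```python
import math

# Hypothetical repeat-sales pairs: the same home sold in two different years.
# (home_id, price at first sale, price at second sale)
pairs = [
    ("A", 200_000, 230_000),
    ("B", 150_000, 171_000),
    ("C", 400_000, 452_000),
]

# Averaging the log price relatives (a geometric mean) gives a
# constant-quality estimate of how much home prices rose between
# the two years, since each comparison holds the home itself fixed.
log_relatives = [math.log(p2 / p1) for _, p1, p2 in pairs]
index_change = math.exp(sum(log_relatives) / len(log_relatives))
print(f"constant-quality index change: {100 * (index_change - 1):.1f}%")
```

Because each pair compares a home to itself, changes in the mix of homes sold (more new construction, say) do not contaminate the index the way a simple average of sale prices would.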

Using 20-20 hindsight, it looks as if there has been an overall rise in home prices since the mid-1990s, with a speed bump in the middle of the rise. The blue and red lines show data for the US economy as a whole; the purple line focuses on California, where the housing price rise was faster, the fall was larger, and the ensuing rise has been faster, too. It looks rather as if the higher prices of homes back circa 2006 were in fact not grossly unreasonable when viewed from the perspective of a decade or so later.

How does one make sense of this pattern? Gabriel Chodorow-Reich, Adam M. Guren, and Timothy J. McQuade offer an analysis in “The 2000s Housing Cycle with 2020 Hindsight: A Neo-Kindlebergian View” (NBER Working Paper 29140, August 2021). “Kindlebergian” is a grotesque coinage referring to the work of Charles Kindleberger, whose 1978 historical overview Manias, Panics, and Crashes: A History of Financial Crises is a touchstone for anyone trying to understand cycles of boom and bust. This research paper has the normal dose of mathematical and statistical tools, so I can’t recommend it for general readers. But the conclusion lays out the underlying logic of how a reasonable and understandable rise in house values, based on fundamental factors like local income, amenities, and supply factors, can for a time get ahead of itself. That is, prices rise faster for a time than makes sense based on the underlying fundamentals, but then after a correction, prices move to a new equilibrium. They write:

We revisit the 2000s housing cycle with “2020 hindsight.” At the city level, the areas with the largest price increases during the housing boom from 1997 to 2006 had the largest busts from 2006 to 2012 but also the fastest growth after the trough, and as a result have had the largest price appreciation over the full cycle. We present a standard spatial equilibrium framework of house price growth determined by local income, amenities, and supply determinants and show this framework fits the cross-section of city house price growth between 1997 and 2019. The implied long-run fundamental is correlated not only with long-run price growth but also with a strong boom-bust-rebound pattern.

Our neo-Kindlebergian interpretation emphasizes the role of economic fundamentals in setting off this asset price cycle. In our model, the boom results from over-optimism about an increase in the “dividend” growth rate, the bust ensues when beliefs of home buyers and lenders correct, exacerbated by a price-foreclosure spiral that pushes prices below their full-information level, and eventually a rebound emerges as the economy converges to a price path commensurate with fundamental growth. We also acknowledge other features of the episode, including changes in credit supply and speculation, but conclude that these forces cannot substitute for the role of fundamentals as a driving force. … Our findings suggest that while policy may want to temper over-optimism and aggressively mitigate foreclosures, it is also important not to suffocate fundamentally-driven growth. Of course hindsight is 20-20; distinguishing fundamental growth from over-optimism in real time rather than after observing a full boom-bust-rebound poses a formidable task.

Of course, demonstrating that it is possible to construct a model reflecting the pattern of home prices in the last 25 years or so does not prove that the particular model used is the correct one. It seems to me that the rock-bottom interest rates since the Great Recession have also played a role in keeping housing prices high. But that said, those who averred that US housing prices were ridiculously and unsustainably high circa 2006 were apparently correct about the short-run, but not the medium-run.

Interview with Ayşegül Şahin: Quits and Start-ups

David A. Price has an interview with “Ayşegül Şahin: On wage growth, labor’s share of income, and the gender unemployment gap” (Econ Focus: Federal Reserve Bank of Richmond, Second/Third Quarter 2021, pp. 18-22). I had not known, for example, that Şahin had completed the coursework for a doctorate in electrical and electronics engineering when she first encountered economics, and decided instead to change over and pursue a different PhD. Here are two of the comments that caught my eye, but there’s much more in the interview itself:

Differences in the labor market recovery after the Great Recession and the pandemic recession

What was striking about the Great Recession was its persistence. Everybody kept saying at the time that inflation is around the corner, the labor market is getting tighter, but it took a very long time for the labor market to heal. We are not seeing that this time. This was a very different shock. It was sharp, but it was transitory compared to the Great Recession. So the effect was great, but the recovery has been faster as well. I think that’s the main difference. Another big difference is that the Great Recession was a big shock to the construction sector, and we are seeing the opposite now. We’ve been spending more time at our houses and people want to improve their houses and they want bigger houses. …

But the biggest difference is the persistence. After the Great Recession, it took quits rates five or six years to recover. Today, the quits rate is already back to where it started from before the pandemic hit … The quits rate is the number of quits during the entire month as a share of total employment. The quits rate was in Janet Yellen’s dashboard when she was the chair, actually, so lots of people started paying attention to it. … But when there’s a recession, quits go down because people become more risk averse. They don’t want to risk unemployment. So if you don’t like your boss or you don’t like your career, you just say, “OK, I’d better wait a little bit more.” During the Great Recession, this aversion to quitting lasted for a long time. As a result, people were stuck in jobs that they were not necessarily happy about or they were not very productive at. But in this recession, quits rates bounced back quickly. One reason is because there are a lot of job openings; the second is that people want to go back and find jobs that they are better matched at.

The Long-Term Decline in US Start-up Rates

Startups are important for various reasons. First of all, they are important areas of job creation and productivity growth. I have worked on this in the last five or six years, and what we have found is that the declining startup rate is a consequence of the declining growth rate of the labor force in the U.S. economy. … With the declining labor force growth rate, we also started seeing a decline in the startup rate. You can think of the startup rate as the birth rate of firms.

What happens when the birth rate goes into decline is that the population gets older after a while. The same thing has happened with U.S. firms. What does it mean when more firms are older? Older firms are more stable, but they are also slower. They create fewer jobs, which accounts for part of the decline in job creation. An economy like this is more stable — the unemployment rate tends to be lower — but it also has lower productivity growth. That accounts for a lot of trends we have been seeing in the U.S. economy. … [W]hen you look at different sectors and different locations, as we did — we looked at around 10,000 labor markets — you see a decline in startups in more than 90 percent of them. The point that we are making is that there seems to be a common factor affecting almost all the markets in the U.S. economy. …

[I]f you look at the startup rate of the manufacturing sector in 1980, you could have already predicted that this sector’s employment was going to decline over time. That’s because its employment share was way higher than its startup employment share. The entry or lack of entry of startups into a sector gives you information about its condition before you see existing firms exiting the sector. The startup activity that is happening now is another sign of reallocation. Where the startups are entering will be informative in terms of where the economy is going in the near future.

Some Snapshots of US Income Inequality

The Congressional Budget Office has published “The Distribution of Household Income, 2018” (August 2021). It takes a couple of years to pull this data together in a reliable way. The report is full of data and figures about inequality of income over time, together with what the patterns of inequality would look like with adjustments for non-market income (like Social Security), taxes paid, and means-tested benefits received. Here are a few of the images that caught my eye.

A variety of the figures confirm the well-known fact that inequality of US incomes has expanded in recent years, with the biggest gains at the very top of the income distribution. For example, the upper figure shows average income in dollars for the top quintile (fifth) compared to the rest of the income distribution. The second panel shows cumulative growth over time as a percentage amount. These calculations look only at income, without taking taxes or transfer payments into account.

The next figure breaks down the top fifth into smaller categories, and shows cumulative income growth for these subgroups. As you can see, cumulative income growth (again, not counting taxes or transfers) has been fastest at the very top, with the largest gains for the top 0.01 percent of the income distribution.

If we take taxes into account, an interesting pattern is that the average federal tax rates for the top 1% and the top fifth haven’t changed much since the 1990s. (This calculation combines all federal taxes paid, not just income tax.) However, average federal tax rates have fallen during that time for the rest of the income distribution, especially the bottom quintile. Thus, the share of taxes paid by those with higher incomes has been rising over time.

What if we now focus on income after taxes and transfer payments? Here, an interesting pattern emerges in the bottom panel of the next figure: the percentage growth in income for the bottom fifth is actually fairly close to that for the top fifth; it’s the middle three-fifths that has slower income growth by this measure. One underlying reason, noted above, is that taxes have fallen by more for the bottom fifth. Another reason is that rising health care costs mean that Medicaid payments for the bottom fifth have risen.

Finally, here’s a measure of how the inequality of the US income distribution has been shifting using the Gini coefficient, a standard measure used in these comparisons.

The top line shows rising inequality in market incomes. The second line shows lower inequality–though still rising–if one includes nonmarket sources of income like Social Security and Medicare. The third line adds means-tested benefits like food stamps, Medicaid, welfare payments, and so on. These reduce inequality further, and you will notice that with these adjustments, the rise in inequality is to some extent flattened out. The bottom line is income after taxes and transfer payments. By this measure, it’s striking to notice, the rise in inequality since about 1990 has been almost entirely leveled out. The Gini coefficient for this measure was .437 in the US economy in 2018; for comparison, it was .435 back in 2000 and .425 in 1986. In other words, the rising inequality in market incomes over the last few decades has been mostly offset by changes in federal taxes and transfer payments.
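For readers who want the mechanics behind numbers like .437, the Gini coefficient can be computed directly from a list of incomes. A minimal sketch (hypothetical incomes, not the CBO’s microdata):

```python
def gini(values):
    """Gini coefficient of a list of incomes, via the sorted-rank formula.
    Returns 0 for perfect equality, approaching 1 as one unit holds everything."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Weighted sum of values by their 1-indexed rank in the sorted list.
    rank_weighted = sum(rank * x for rank, x in enumerate(xs, start=1))
    return (2 * rank_weighted) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))    # perfect equality -> 0.0
print(gini([0, 0, 0, 10]))   # one holder of all income -> 0.75
```

The second example shows why the Gini never quite reaches 1 in a finite sample: with one person holding everything among four, the coefficient is 0.75, tending toward 1 only as the population grows.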

It’s worth noting that the Gini coefficient, while very commonly used, has all the standard problems that occur when you try to boil down an entire income distribution to a single number. In this case, in particular, remember the figure above showing the rise for the lowest quintile after taxes and transfers were taken into account. Thus, one way to think about the shifting income distribution over time is that incomes in the top fifth and the bottom fifth have grown by more than those in the middle. By that summary measure, the distribution of income has not become more unequal. But if you’re in the middle three-fifths, you haven’t been keeping up.

Whither Battery Power?

One possible clean energy agenda is “electrify everything,” with the idea being to focus on carbon-free methods of generating electricity. However, a number of carbon-free methods, like solar and wind (and even hydropower at some times and places), have the problem that the power generated can ebb and flow. (Nuclear and geothermal power are examples of carbon-free electricity that is not intermittent.) Are there reliable ways of storing sufficient electricity for those times–which can easily be days and might even be weeks–when the sun doesn’t shine and the wind doesn’t blow? The energy storage question may well determine the workability of the “electrify everything” agenda.

One obvious answer is to store electricity in rechargeable lithium-ion batteries, but doing this at sufficient scale will require dramatic improvements in the price and capacity of batteries. Micah S. Ziegler and Jessika E. Trancik discuss technological progress in this area in “Re-examining rates of lithium-ion battery technology improvement and cost decline” (Energy and Environmental Science, 2021, issue 4, pp. 1635-1651). They write:

Energy storage technologies have the potential to enable greenhouse gas emissions reductions via electrification of transportation systems and integration of intermittent renewable energy resources into the electricity grid. Lithium-ion technologies offer one possible option, but their costs remain high relative to cost-competitiveness targets, which could hinder these technologies’ broader adoption … However, their deployment is still relatively limited, and their broader adoption will depend on their potential for cost reduction and performance improvement. Understanding this potential can inform critical climate change mitigation strategies, including public policies and technology development efforts. … Here we systematically collect, harmonize, and combine various data series of price, market size, research and development, and performance of lithium-ion technologies. We then develop representative series for these measures, while separating cylindrical cells from all types of cells. For both, we find that the real price of lithium-ion cells, scaled by their energy capacity, has declined by about 97% since their commercial introduction in 1991. We estimate that between 1992 and 2016, real price per energy capacity declined 13% per year for both all types of cells and cylindrical cells, and upon a doubling of cumulative market size, decreased 20% for all types of cells and 24% for cylindrical cells.
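The two improvement rates quoted above–roughly 13% per year, and 20-24% per doubling of cumulative market size–are standard experience-curve measures, and it’s easy to see how they compound. A quick sketch (the starting price is illustrative, not the paper’s data):

```python
def price_after_doublings(p0, learning_rate, doublings):
    """Wright's-law projection: price falls by `learning_rate` with each
    doubling of cumulative production."""
    return p0 * (1 - learning_rate) ** doublings

# Illustrative starting price with the paper's ~20% learning rate.
p0, lr = 1000.0, 0.20
for d in range(5):
    print(f"after {d} doublings: ${price_after_doublings(p0, lr, d):.0f}")

# The annual-rate version: a 13% per-year decline compounded over 1992-2016.
fraction_left = (1 - 0.13) ** 24
print(f"13%/yr over 24 years leaves {100 * fraction_left:.1f}% of the price")
```

Note that 0.87 raised to the 24th power is about 0.035, which lines up with the roughly 97% total price decline the authors report.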

For me, several lessons emerge from their essay: 1) Technological progress in rechargeable lithium-ion batteries has been larger than I realized; 2) It’s not nearly enough, as yet, to make heavy reliance on solar and wind energy possible; and 3) We all need to start dividing up our thinking about batteries, in the sense that the batteries that power portable electronics may look quite different from, say, batteries installed to store energy for homes, neighborhoods, or factories.

For example, in California there is an enormous lithium-ion battery facility mostly completed, which “will be able to discharge enough electricity to power roughly 300,000 California homes for four hours.” Another facility is planning to use a group of Tesla batteries so that there will be enough electricity storage to power all homes in San Francisco for six hours. Similar facilities are under consideration or actually being built around the world. These kinds of facilities can serve a useful purpose in avoiding electricity outages, and dealing with situations where power demand spikes above supply for a few hours. They can replace the need for backup generating capacity that would otherwise be called on in these situations. But if you want to rely heavily on wind and solar power, and to be very sure that the power won’t go off for an extended time, these sorts of industrial-sized battery facilities are only a start.
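A back-of-the-envelope calculation shows the scale that “300,000 homes for four hours” implies. The average household draw of about 1 kW is my assumption for illustration, not a figure from the source:

```python
# Energy implied by powering 300,000 homes for four hours,
# assuming an average household draw of ~1 kW (assumed, for illustration).
homes = 300_000
avg_draw_kw = 1.0
hours = 4
energy_gwh = homes * avg_draw_kw * hours / 1e6  # kWh -> GWh
print(f"implied storage: {energy_gwh:.1f} GWh")
```

That works out to roughly 1.2 GWh, which conveys both how large these facilities are by battery standards and how small four hours of buffer is against a windless, cloudy week.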

However, another issue with batteries is that manufacturing them requires a surge of mining and extensive use of minerals, and then recycling or disposing of the giant industrial-size batteries requires additional efforts. Thus, the debate over storing energy in batteries often drifts into a related debate about other forms of storing electricity.

For example, one approach is pumped-hydropower storage. The basic idea here is to use carbon-free energy to pump water from a lower river or lake so that it is up behind a dam–where it can then run down and generate electricity. There are now 43 of these projects in the US. The International Hydropower Association writes: “Pumped storage hydropower is the world’s largest battery technology, accounting for over 94 per cent of installed energy storage capacity, well ahead of lithium-ion and other battery types.”

What if you aren’t near hydropower? In that case, I’m reading a little more these days about “gravity-based batteries.” The idea here is that if you can store energy by using carbon-free electricity to pump water up above a dam, so that it will run a turbine when gravity pulls the water back down, why not use carbon-free electricity to lift a heavy weight, and then let the weight drive a turbine as it comes back down? The gravity-battery technology is quite immature. But for the sake of argument, imagine the possibility of using an abandoned mineshaft that runs straight down for a few kilometers, and retrofitting the mine as a gravity battery.
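An order-of-magnitude check using the potential-energy formula E = mgh suggests the scale required; all numbers here are assumed for illustration, not taken from any specific proposal:

```python
# Potential energy stored by lifting a mass: E = m * g * h.
# All numbers assumed for illustration.
m = 1_000_000.0   # kg: a 1,000-tonne weight
g = 9.81          # m/s^2: gravitational acceleration
h = 1000.0        # m: drop available in a deep mine shaft
energy_j = m * g * h
energy_mwh = energy_j / 3.6e9   # 1 MWh = 3.6e9 joules
print(f"stored energy: {energy_mwh:.2f} MWh")
```

That is only about 2.7 MWh per full cycle, on the order of the daily consumption of around 90 US homes (at roughly 30 kWh per household per day), which is why serious gravity-storage proposals involve very large masses, very deep shafts, or many units operating together.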

Cathleen O’Grady offers an overview of gravity batteries in “Gravity-based batteries try to beat their chemical cousins with winches, weights, and mine shafts” (Science, April 22, 2021). I have no clear idea whether gravity batteries will, at the end of the day, store a meaningful amount of power. But some of the early studies suggest that they could be cost-competitive with industrial-battery technology. Moreover, hoisting and lowering big weights is a much cleaner environmental proposition than manufacturing and disposing of batteries.

For the sake of completeness, I should add a mention of hydrogen fuel here. If the hydrogen is generated by a non-carbon technology, then it offers yet another way of storing energy for later use. I’ve written about possibilities and issues with hydrogen power before, and won’t repeat it here. But it’s perhaps worth saying that while there’s been a lot of talk about hydrogen fuel cells as a source of energy for vehicles, my sense is that the current technology may be better-suited for applications that provide heating, cooling, and electricity for homes and buildings.

Global Food Production: Too Little and Too Much

On one hand, it seems terribly important that global food supply increase dramatically in the next few decades, to feed the hungry around the world as global population rises. On another hand, obesity is a huge and ongoing health problem around the world, even in many countries that do not have high income levels. And on yet a third hand, food production around the world is often a major contributor to environmental degradation, being related to issues including water pollution, deforestation, pesticides that linger in the ecosystem, and release of greenhouse gases.

A desirable path for the future of global food production will take all of these into account. The Credit Suisse Research Institute takes a shot at reconciling them in “The Global Food System: Identifying sustainable solutions” (June 2021). Let’s start with a description of the competing priorities:

About 9% of global population, consisting of about 700 million people, is undernourished.

About 40% of the world population is overweight.

When it comes to environmental issues, the report notes:

Food production and consumption already contribute well over 20% to global greenhouse gas emissions and account for more than 90% of the world’s freshwater consumption. After reviewing the environmental footprint of all major food groups, we conclude that the current situation is likely to worsen significantly unless action is taken. The likely growth in the world’s population to around ten billion people by 2050 coupled with a further shift in diets, especially across the growing emerging middle class, could increase emissions by a further 46%, while demand for agricultural land could increase by 49%. … [T]he growth in agricultural land seen to date has come at the cost of greater deforestation. Data from Globalforestwatch suggest that annual tree cover loss has increased from around 14 million hectares in 2001 to around 25 million hectares in 2019 … . The FAO indicates that some 420 million hectares of forest has been lost since 1990, which is the same as roughly eight times the size of France or 50% of the USA. Deforestation not only releases stored carbon dioxide, but also reduces the ability to capture future carbon releases. Furthermore, it contributes to a loss in biodiversity and puts pressure on soil quality, which in turn is seen as contributing to the risk of drought and floods.

What’s the pathway through this maze of concerns? Start with the environmental issue. Here’s a table that tries to compare the environmental costs of a variety of foods. I’m sure one can quarrel with the details, but the broad nature of the overall rankings is clear: vegetables and fruits in general have lower environmental effects, while meat and dairy, and beef in particular, have the highest.

As it turns out, many of the human health issues of obesity are related to consumption of meat and dairy, and more broadly to limited consumption of vegetables and fruits. And when it comes to expanding calorie outputs for a growing world population, it’s probably efficient to do so by expanding non-meat alternatives.

Along with a shift away from meat in general and beef in particular, there are some other useful steps to be taken. One is to take food waste seriously as a policy concern:

[M]ore than 30% of food produced is either lost or wasted. By way of example, around USD 408 billion of food produced in 2019 went unsold or uneaten. The FAO estimates the economic, environmental and social costs associated with food waste at USD 2.6 trillion. Eliminating food waste in the United States and Europe alone would add 10% to the world’s available food supply. Solutions need to focus across the entire supply chain as about 50% of food is lost in the production and handling phase, while 45% is wasted in the distribution and consumption phase.
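Taken at face value, the report’s percentages imply that roughly 15% of all food produced disappears before it ever reaches distribution. Here is a quick sketch of that arithmetic; the 30%, 50%, and 45% figures come straight from the excerpt, and the small unattributed remainder is left as-is:

```python
# Back-of-the-envelope arithmetic on the report's food-waste figures.
# All three shares below are quoted in the excerpt; the breakdown of
# the remaining ~5% of waste is not given there.
total_lost_or_wasted = 0.30        # >30% of food produced is lost or wasted
production_handling = 0.50         # share of that waste lost in production/handling
distribution_consumption = 0.45    # share wasted in distribution/consumption

waste_upstream = total_lost_or_wasted * production_handling
waste_downstream = total_lost_or_wasted * distribution_consumption

print(f"Share of all food lost in production/handling:   {waste_upstream:.1%}")
print(f"Share of all food wasted in distribution/consumption: {waste_downstream:.1%}")
```

That upstream 15% is why the report pushes solutions across the entire supply chain, not just at the consumer end.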

The agenda for reducing food waste often focuses on issues like improved storage, faster transportation, and recognizing alternative uses that will give a stable shelf-life to food that would otherwise have spoiled. To cite one of a number of examples from the report: “Baldor, a major food processor that makes products like ‘baby’ carrots (i.e. regular carrots carved into tiny pieces), turns fruit and vegetable scraps into multiple products: some fruit scraps go to juice companies, vegetable scraps go to chefs for use in stocks, a mix of vegetables are dried and crushed into a flour that can be used in place of wheat, and other scraps are used in meal kits that include veggie noodles.”

Another set of options involve bringing the technology revolution to farming. For example, the report suggests that “[p]recision farming through the use of artificial intelligence, drones, autonomous machinery and smart irrigation systems could yield productivity increases of 70% by 2050.”

Another option is “vertical farming,” “which is an indoor approach consisting of controlling all environmental factors such as light, humidity and temperature, with the aim of producing more food by harvesting crops vertically. This concept enables the cultivation of various crop types ranging from leafy greens and tomatoes to herbs and flowers, as well as microgreens, and fulfills environmental, social and economic goals. … According to the Ellen MacArthur Foundation, it is possible that, by 2050, 80% of the food consumed in urban areas could be produced using vertical farming technologies.” In fact, the Netherlands is the world’s #2 exporter of agricultural products by value, thanks in part to its embrace of these kinds of technologies.

In thinking about a shift away from traditional meat, several options have been getting a lot of attention. “[L]ivestock provides just 18% of calories consumed by humans, but takes up close to 80% of global farmland.”

One option is “plant-based meat” products like Beyond Meat and Impossible Foods. “The reason for supporting the growth of plant-based meat is that it uses 72%–99% less water and 47%–99% less land than traditional animal-based meat. In addition, water pollution is substantially lower, whereas GHG emissions are also between 30% and 90% lower. One other aspect worth highlighting is that plant-based meat does not require the use of antibiotics, which is very common with animal-based meat production.”
Another option is “cultivated meat,” which is meat grown directly from cells. “For example, cultivated meat has a feed conversion ratio (kg in per kg out) that is more than seven times higher than that of beef cattle and almost six times higher than that of pork.” Yet another option is the use of fermentation: “Alternative proteins can also be produced through fermentation processes using microorganisms. Traditionally, fermentation has been used to make beer, wine and cheese, and the same process can be used to improve the flavor of plant ingredients. … Biomass fermentation has the clear advantage of speed. The doubling time of the microorganisms used is hours compared to months or longer for animals.”

When thinking about these alternative products, I always add two thoughts of my own. First, the potential productivity gains for these products seem likely to be high, which suggests that it will be possible to drive down their prices dramatically. If a plant-based “hamburger” at the fast-food drive-through cost half the price of ground beef, or less, my guess is that I wouldn’t be alone in being willing to make the shift. Second, as long as we think about these products as meat substitutes, I suspect they will feel unsatisfying. But it’s relatively straightforward to tinker with the taste of plant-based products, and eventually some of them will be viewed not as substitutes for something else, but as products whose popularity stands on its own.

Reports like this one often jump pretty quickly from a list of problems to discussions of government regulation and requirements, and this report is no exception. Many jurisdictions are passing taxes on sugared drinks; is a tax on meat next? I’m agnostic on much of this policy agenda, which is to say that I’ll judge the individual proposals as they come along. I suspect that the pull and push of demand and supply will bring about many of these changes, as the world moves toward feeding a few billion more people at a time of rising environmental concerns. Thus, I tend to see this report as a forecast of where we are already heading.

An Earn-and-Learn Career Path

What paths can high school students take in accumulating hard and soft skills so that they can make the transition to a career and job? The main answer in US society is “go to college.” But for a large share of high school graduates, being told that they now need to attend several more years of classes is not what they want to hear.

Historically, a common alternative career path was that companies hired young workers who had little to offer other than energy and flexibility, and then trained and promoted those workers. Unions often played a role in advocating and supporting this training, too. But in the last few decades, it seems that a lot of companies have exited the job training business. Their general sense is that young adults aren’t likely to stay with the company, so in effect, you are training them for their next employer. Instead, better just to require that new hires already have experience.

The earn-and-learn career path tries to steer between these extremes. Yes, it involves additional learning, because that’s what 21st century jobs are like, but it seeks to have that learning take place more in the workplace than in the classroom. Also, instead of paying to learn, you get paid while learning. On the other side, firms that participate in this kind of training don’t need to take on the entire responsibility and cost of doing so.

Annelies Goger sketches this framework in “Desegregating work and learning through ‘earn-and-learn’ models” (Brookings Institution, December 9, 2020). She points out the gigantic difference in public support for higher education vs. public support for an earn-and-learn approach.

The earn-and-learn programs under the public workforce system—authorized under the Workforce Innovation and Opportunity Act (WIOA)—are underused and hard to scale. Publicly funded job training options are tiny overall compared to investments in traditional public higher education or classroom-based job training. Funding for public higher education was $385 billion in 2017-18, compared to about $14 billion for employment services and training across 43 programs. The net result is that higher education is the main provider of publicly funded training for most Americans, and most of the $14 billion for employment services and training goes to services (most of which isn’t training) for special populations such as veterans and people with disabilities.

This difference in public support is even more stark when you recognize that those with a college education are likely to end up with higher average incomes during their lives, so that we are doing more to subsidize the training of the relatively high earners of the future than we are to subsidize the training of the middle- and lower-level earners.

What do the earn-and-learn programs look like in practice? Here’s a graphic:

[Figure: types of earn-and-learn programs]

I won’t try to go through these choices one at a time: for present purposes, the salient fact is that they are all small in size. As long-time readers know, I’m a fan of a dramatic expansion of apprenticeships (for example, here, here, here, and here). But as Goger writes:

For example, the U.S. had roughly 238,000 new registered apprentices in 2018. However, if the U.S. had the same share of new apprentices per capita as Germany, we would have 2 million new apprentices per year; if we had the same share as the United Kingdom or Switzerland, that number would be 3 million.
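Goger’s comparison is a per-capita scaling. A minimal sketch of the implied calculation, assuming a US population of roughly 330 million (a figure not given in the excerpt; the apprentice counts are):

```python
# Implied per-capita apprenticeship rates behind Goger's comparison.
# ASSUMPTION: US population of ~330 million (2018-era); the apprentice
# counts come from the quoted passage.
US_POPULATION = 330_000_000

us_new_apprentices = 238_000       # actual US figure, 2018
at_german_rate = 2_000_000         # US total if it matched Germany's per-capita rate
at_uk_swiss_rate = 3_000_000       # US total at the UK/Swiss rate

us_rate = us_new_apprentices / US_POPULATION
german_rate = at_german_rate / US_POPULATION

# How many times higher Germany's per-capita rate is than the US rate
ratio = at_german_rate / us_new_apprentices

print(f"US new apprentices per 1,000 people:  {1000 * us_rate:.2f}")
print(f"Implied German rate per 1,000 people: {1000 * german_rate:.2f}")
print(f"Germany's rate relative to the US: ~{ratio:.1f}x")
```

In other words, matching Germany would mean roughly an eightfold expansion of US apprenticeships, and matching the UK or Switzerland a twelvefold one.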

Can Administrative Health Care Reforms Get the Administrative Cost Advantages of Single Payer?

One infuriating aspect of the US health care industry is the high administrative costs. Here’s a description from David Scheinker, Barak D. Richman, Arnold Milstein, and Kevin A. Schulman in “Reducing administrative costs in US health care: Assessing single payer and its alternatives” (Health Services Research, published online March 31, 2021, not yet assigned to an issue). They write (footnotes omitted):

The transaction cost of paying for services with a commercial credit card is approximately 2% of the total cost, whereas Tseng (2018) calculated that it is 14.5% when providers bill insurance companies for physician services. A similar percentage is consumed for hospital billing, and approximately an additional 15% is retained by commercial insurers for claims processing and other costs under the Affordable Care Act.

Health care administrative costs in the United States are higher than in other rich nations. Estimates suggest over $265 billion of annual spending is wasted due to administrative complexity, yet the substantial literature on wasted health care spending offers little discussion of what drives these costs or how to reduce them. … The most common proposal to reduce the transaction costs of paying for health care has been to advocate for a single-payer “Medicare-for- All” model that nationalizes Medicare fee-for-service coverage, with all of that program’s complexity.
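To make the gap concrete, here is a small sketch using the percentages quoted above (2% card-style transaction cost versus 14.5% provider billing cost plus roughly 15% retained by insurers); the $200 bill is just an illustrative number of my own, not from the paper:

```python
# Illustrative comparison of payment overhead, using the percentages
# from the quoted passage. The bill amount is arbitrary.
bill = 200.00

credit_card_overhead = 0.02 * bill     # ~2% card-style transaction cost
provider_billing = 0.145 * bill        # providers' cost of billing insurers
insurer_retention = 0.15 * bill        # insurers' claims processing, etc.

health_care_overhead = provider_billing + insurer_retention

print(f"Card-style overhead:          ${credit_card_overhead:.2f}")
print(f"Health-care billing overhead: ${health_care_overhead:.2f}")
print(f"Overhead ratio: ~{health_care_overhead / credit_card_overhead:.0f}x")
```

On these figures, roughly fifteen times as much of each health care dollar goes to transaction costs as would under a card-style payment system.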

There are at least three possibilities for making a meaningful reduction in administrative spending in the US health care sector: a single payer system, a set of rules that would require standardization and simplification of health care contracts, and a national health care automated clearinghouse for payments to health care providers. Let’s mull them over in turn.

Perhaps the main difficulty with seeking to reduce US health care administrative costs by the use of a single-payer system is that, for better or worse, there seems to be approximately zero political momentum for the US to adopt such a system.

However, an additional concern is that single-payer systems come in many different models. Yes, if there is a single payer for all health care expenses who just sends out checks to all health care providers, administrative costs will be lower. But given that the US health-care system accounts for about one-fifth of the entire US economy, and rising, even a single-payer system will need various controls over what prices are charged for what services and what services are covered. Most single-payer systems around the world are combined with overall limits on hospital spending, physician fees, and technology. In other words, part of the reason for lower administrative costs in single-payer systems around the world is that the choices over what services to offer and what to charge have already been tightly limited.

In an essay on “Reducing Administrative Costs in U.S. Health Care” (Hamilton Project, March 2020), David M. Cutler acknowledges that a single-payer system would certainly eliminate some administrative costs, like the sales and marketing costs, as well as the profits, of private insurance companies. However, if a US single-payer system did not also include limits on fees and available services, administrative costs would remain substantial. Cutler writes (citations omitted for readability):

Perhaps the major issue in single-payer health care is the trade-off between administrative costs and other ways of controlling use. Many single-payer proposals envision lower administrative costs such as those in the Canadian system. Such a system has other important features, however, that are necessary to offset administrative spending reductions. For example, Canada has very tight restrictions on technology acquisition; there are only one-quarter the number of MRIs per capita in Canada as there are in the United States. Similarly, there is a budget for hospitals and a fee schedule for physicians, where fees are set so that a total spending target is met. The ability to do this in the United States is in some doubt … If the United States does not implement the type of budget and technology regulation that other countries have, the impact of single-payer health care on administrative costs is less certain.

Given the political and practical difficulties of tackling health care administrative costs via a single payer system, Scheinker, Richman, Milstein, and Schulman go through a modelling exercise where they try to separate out the main billing and insurance-related (BIR) costs across “three components of BIR costs (fixed costs, per-visit clinical documentation variable costs, and per-visit nonclinical documentation variable costs) associated with five types of visits (primary care, emergency department visits, inpatient stays, ambulatory surgery, and inpatient surgery).” They then consider a list of ways of standardizing and simplifying current billing procedures, together with how these might happen in a single-payer or in a multi-payer health care system. They conclude:  

Our model estimates that national BIR [billing and insurance-related] costs are reduced between 33% and 53% in Medicare-for-All style single-payer models and between 27% and 63% in various multi-payer models. Under a wide range of assumptions and sensitivity analyses, standardizing contracts generates larger savings with less variance than savings from single-payer strategies. … Although moving toward a single-payer system will reduce BIR costs, certain reforms to payer-provider contracts could generate at least as many administrative cost savings without radically reforming the entire health system. BIR costs can be meaningfully reduced without abandoning a multi-payer system.

I lack the expertise in health care billing systems to offer any useful insight into the realism of their claims, but for what it’s worth, all of the authors are affiliated with the Clinical Excellence Research Center at Stanford’s medical school.

In the essay mentioned above, David Cutler proposes that these kinds of changes might be implemented via a national health care automated clearinghouse for payments to health care providers: that is, all health care providers would submit their bills to the clearinghouse, and all insurers would pay the bills as submitted through the clearinghouse. Cutler makes the argument this way:

I propose that Congress establish a clearinghouse for bill submission … More than 6 billion medical claims are filed annually in the United States. While almost all of these claims are filed electronically, the system is not as efficient as it could be. The issue is sometimes posed as the need for a single claim form, but that is not correct. The HIPAA legislation of 1996 required standardized claims forms and that has now been achieved. Nevertheless, the system still has some limitations. To begin, the information required by different payers can be different, even with the same form. For example, one insurer might require special revenue codes for particular specialties that are different from other insurers. In other cases, the physician’s specialty may differ across insurers (medicine vs. gastroenterology, for example), which could necessitate a different set of codes. Or the codes given for claim denial may differ across insurers. And still other insurers are not required to use these standardized forms (e.g., workers’ compensation and auto insurers). In addition, many insurers require attachments to claims, and these attachments are generally not standardized. An attachment might involve a certificate of medical necessity, a discharge summary, or details of a lab report … [O]nly 20 percent of claims attachments are standardized. One of the primary reasons why attachments are not standardized is that a federal standard has not yet been named by the Department of Health and Human Services as required under HIPAA and the ACA …

Cutler goes on to point out opportunities for streamlining issues like getting prior authorization for procedures and for documenting quality control. He suggests that any health care providers involved with government payments could be required to send their bills through the national system, while those who prefer to bill outside the system could be charged a modest fee for doing so.

Making these kinds of changes to health care billing and administration, whether it is done through a national clearinghouse or in some other way, isn’t a deep conceptual problem or one involving complex tradeoffs. In a well-functioning political system, it seems to me like the kind of detailed, nuts-and-bolts problem that could be addressed by detailed, nuts-and-bolts politicians and administrators beavering away at the task on a bipartisan basis until it’s done.

Summer 2021 Journal of Economic Perspectives Available Online

I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available on-line, from the current issue all the way back to the first issue. You can download individual articles or the entire issue, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Summer 2021 issue, which in the Taylor household is known as issue #137. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

_________________________________________________________

Symposium on COVID-19
“Effects of the COVID-19 Recession on the US Labor Market: Occupation, Family, and Gender,” by Stefania Albanesi and Jiyeon Kim
The economic crisis associated with the emergence of the novel corona virus is unlike standard recessions. Demand for workers in high contact and inflexible service occupations has declined while parental supply of labor has been reduced by lack of access to reliable child care and in-person schooling options. This has led to a substantial and persistent drop in employment and labor force participation for women, who are typically less affected by recessions than men. We examine real-time data on employment, unemployment, labor force participation and gross job flows to document the impact of the pandemic by occupation, gender and family status. We also discuss the potential long-term implications of this crisis, including the role of automation in depressing the recovery of employment for the worst hit service occupations.
Full-Text Access | Supplementary Materials
“The Great Unequalizer: Initial Health Effects of COVID-19 in the United States,” by Marcella Alsan, Amitabh Chandra and Kosali Simon
We measure inequities from the COVID-19 pandemic on mortality and hospitalizations in the United States during the early months of the outbreak. We discuss challenges in measuring health outcomes and health inequality, some of which are specific to COVID-19 and others that complicate attribution during most large health shocks. As in past epidemics, preexisting biological and social vulnerabilities profoundly influenced the distribution of disease. In addition to the elderly, Hispanic, Black and Native American communities were disproportionately affected by the virus, particularly when assessed using the years of potential life lost metric. We provide a conceptual framework and initial empirical analysis that seek to shed light on contributors to pandemic-related health inequality, and we suggest areas for future research.
Full-Text Access | Supplementary Materials
“Tracking the Pandemic in Real Time: Administrative Micro Data in Business Cycles Enters the Spotlight,” by Joseph Vavra
In this paper I discuss the increasingly prominent role of administrative micro data in macroeconomics research. This type of data proved important for interpreting the causes and consequences of the Great Recession, and it has played a crucial role in shaping economists’ understanding of the COVID-19 pandemic in near real-time. I discuss a number of specific insights from this research while also illustrating some of the broader opportunities and challenges of working with administrative data.
Full-Text Access | Supplementary Materials
Symposium on the Washington Consensus Revisited
“Some Thoughts on the Washington Consensus and Subsequent Global Development Experience,” by Michael Spence
This paper discusses the Washington Consensus, its origins, and its insights in terms of subsequent development experience in a broad range of countries. I continue to find that when properly interpreted as a guide to the formulation of country-specific development strategies, the Washington Consensus has withstood the test of time quite well. In my view, subsequent experience, especially in Asia, reveals a number of places where a shift in emphasis would be warranted. Finally, I try to identify some misuses of the Washington Consensus and suggest that it was vulnerable to misuse due to the absence of an accompanying and explicit development model.
Full-Text Access | Supplementary Materials
“The Baker Hypothesis: Stabilization, Structural Reforms, and Economic Growth,” by Anusha Chari, Peter Blair Henry and Hector Reyes
In 1985, James A. Baker III’s “Program for Sustained Growth” proposed a set of economic policy reforms including, inflation stabilization, trade liberalization, greater openness to foreign investment, and privatization, that he believed would lead to faster growth in countries then known as the Third World, but now categorized as emerging and developing economies (EMDEs). A country-specific, time-series assessment of the reform process reveals three clear facts. First, in the ten-year period after stabilizing high inflation, the average growth rate of real GDP in EMDEs is 2.6 percentage points higher than in the prior ten-year period. Second, the corresponding growth increase for trade liberalization episodes is 2.66 percentage points. Third, in the decade after opening their capital markets to foreign equity investment, the spread between EMDEs average cost of equity capital and that of the US declines by 240 basis points. The impact of privatization is less straightforward to assess, but taken together, the three central facts of reform provide empirical support for the Baker Hypothesis and suggest a simple neoclassical interpretation of the unprecedented increase in growth that has taken place in EMDEs since the early 1990s.
Full-Text Access | Supplementary Materials
“Washington Consensus in Latin America: From Raw Model to Straw Man,” by Ilan Goldfajn, Lorenza Martínez and Rodrigo O. Valdés
We take stock of three decades of a love-hate relationship between Latin American policies and the Washington Consensus, reviewing its implementation, national debate, and outcomes. Using regional data and case studies of Brazil, Chile, and Mexico, we discuss the various degrees of the Washington Consensus implementation and evaluate performance. We find mixed results: macroeconomic stability is much improved, but economic growth has been heterogeneous and generally disappointing, despite improvement relative to the 1980s. We discuss the risk that the region could revert parts of the Washington Consensus reforms, which are necessary building blocks for a new agenda more focused on social integration, a fairer and just society, and environmentally sustainable growth based on better education.
Full-Text Access | Supplementary Materials
“Washington Consensus Reforms and Lessons for Economic Performance in Sub-Saharan Africa,” by Belinda Archibong, Brahima Coulibaly and Ngozi Okonjo-Iweala
Over three decades after market-oriented structural reforms termed “Washington Consensus” policies were first implemented, we revisit the evidence on policy adoption and the effects of these policies on socio-economic performance in sub-Saharan African countries. We focus on three key ubiquitous reform policies around privatization, fiscal discipline, and trade openness and document significant improvements in economic performance for reformers over the past two decades. Following initial declines in per capita economic growth over the 1980s and 1990s, reform adopters experienced notable increases in per capita real GDP growth in the post–2000 period. We complement aggregate analysis with four country case studies that highlight important lessons for effective reform. Notably, the ability to implement pro-poor policies alongside market-oriented reforms played a central role in successful policy performance.
Full-Text Access | Supplementary Materials
Symposium on Statistical Significance
“Statistical Significance, p-Values, and the Reporting of Uncertainty,” by Guido W. Imbens
The use of statistical significance and p-values has become a matter of substantial controversy in various fields using statistical methods. This has gone as far as some journals banning the use of indicators for statistical significance, or even any reports of p-values, and, in one case, any mention of confidence intervals. I discuss three of the issues that have led to these often-heated debates. First, I argue that in many cases, p-values and indicators of statistical significance do not answer the questions of primary interest. Such questions typically involve making (recommendations on) decisions under uncertainty. In that case, point estimates and measures of uncertainty in the form of confidence intervals or even better, Bayesian intervals, are often more informative summary statistics. In fact, in that case, the presence or absence of statistical significance is essentially irrelevant, and including them in the discussion may confuse the matter at hand. Second, I argue that there are also cases where testing null hypotheses is a natural goal and where p-values are reasonable and appropriate summary statistics. I conclude that banning them in general is counterproductive. Third, I discuss that the overemphasis in empirical work on statistical significance has led to abuse of p-values in the form of p-hacking and publication bias. The use of pre-analysis plans and replication studies, in combination with lowering the emphasis on statistical significance may help address these problems.
Full-Text Access | Supplementary Materials
“Of Forking Paths and Tied Hands: Selective Publication of Findings, and What Economists Should Do about It,” by Maximilian Kasy
A key challenge for interpreting published empirical research is the fact that published findings might be selected by researchers or by journals. Selection might be based on criteria such as significance, consistency with theory, or the surprisingness of findings or their plausibility. Selection leads to biased estimates, reduced coverage of confidence intervals, and distorted posterior beliefs. I review methods for detecting and quantifying selection based on the distribution of p-values, systematic replication studies, and meta-studies. I then discuss the conflicting recommendations regarding selection resulting from alternative objectives, in particular, the validity of inference versus the relevance of findings for decision-makers. Based on this discussion, I consider various reform proposals, such as deemphasizing significance, pre-analysis plans, journals for null results and replication studies, and a functionally differentiated publication system. In conclusion, I argue that we need alternative foundations of statistics that go beyond the single-agent model of decision theory.
Full-Text Access | Supplementary Materials
“Evidence on Research Transparency in Economics,” by Edward Miguel
A decade ago, the term “research transparency” was not on economists’ radar screen, but in a few short years a scholarly movement has emerged to bring new open science practices, tools and norms into the mainstream of our discipline. The goal of this article is to lay out the evidence on the adoption of these approaches—in three specific areas: open data, pre-registration and pre-analysis plans, and journal policies—and, more tentatively, begin to assess their impacts on the quality and credibility of economics research. The evidence to date indicates that economics (and related quantitative social science fields) are in a period of rapid transition toward new transparency-enhancing norms. While solid data on the benefits of these practices in economics is still limited, in part due to their relatively recent adoption, there is growing reason to believe that critics’ worst fears regarding onerous adoption costs have not been realized. Finally, the article presents a set of frontier questions and potential innovations.
Full-Text Access | Supplementary Materials
Articles and Features
“Why Is Growth in Developing Countries So Hard to Measure?” by Noam Angrist, Pinelopi Koujianou Goldberg and Dean Jolliffe
Occasional widely publicized controversies have led to the perception that growth statistics from developing countries are not to be trusted. Based on the comparison of several data sources and analysis of novel IMF audit data, we find no support for the view that growth is on average measured less accurately or manipulated more in developing than in developed countries. While developing countries face many challenges in measuring growth, so do higher-income countries, especially those with complex and sometimes rapidly changing economic structures. However, we find consistently higher dispersion of growth estimates from developing countries, lending support to the view that classical measurement error is more problematic in poorer countries and that a few outliers may have had a disproportionate effect on (mis)measurement perceptions. We identify several measurement challenges that are specific to poorer countries, namely limited statistical capacity, the use of outdated data and methods, the large share of the agricultural sector, the informal economy, and limited price data. We show that growth measurement based on the System of National Accounts (SNA) can be improved if supplemented with information from other data sources (for example, satellite-based data on vegetation yields) that address some of the limitations of SNA.
Full-Text Access | Supplementary Materials
“Retrospectives: James Buchanan: Clubs and Alternative Welfare Economics,” by Alain Marciano
James Buchanan wrote “An Economic Theory of Clubs” and invented clubs to support a form of welfare economics in which there is no social welfare function (SWF) and individual utility functions cannot be “read” by external observers. Clubs were a means to allow the implementation of individualized prices for public goods and services and to allow each individual to pay exactly the amount he wants to pay. He developed this project to answer and counter Paul Samuelson’s analysis of public goods, in which social welfare functions play a crucial role. Buchanan and Samuelson disagreed over the allocation of the costs of the public good to each individual. To Buchanan, it was by relying on individual’s preferences. To Samuelson, by using a SWF. Buchanan’s clubs are thus foreign and incompatible with the traditional Samuelson-style public economics in which they are used.
Full-Text Access | Supplementary Materials
“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials