Some Economics of Place Effects

Studying how much you are affected by living in a certain place isn’t easy. After all, people are not first observed, then randomly distributed across places, then observed again. In more concrete terms, those who end up living in a place with high-priced real estate and those who end up living in a place with low-priced real estate may well differ in a number of ways in their backgrounds, education levels, and job history, and it would be peculiar to say that the places in which people ended up live “caused” these differences. Instead, it often seems more plausible that the place where people end up living is caused by earlier mechanisms of social and economic sorting.

Nonetheless, here are two examples of how knowing something about place effects seems important. Imagine that there is a program to provide support for people from low-income neighborhoods to move to higher-income neighborhoods. Will the shift in the place they are living affect their job prospects or the outcomes for their children? Or imagine that a retiree moves from one state to another. Will the shift in place affect their health or the health care they receive? A couple of papers in the Fall 2021 issue of the Journal of Economic Perspectives tackle such questions of place effects head-on.

(Full disclosure: I’ve been Managing Editor of the JEP since the first issue in 1987, and so may be predisposed to believe that the articles are of interest. Fuller disclosure: The JEP and all its articles back to the first issue have been freely available online for a decade now, courtesy of the American Economic Association, so there is no financial benefit for me or anyone else from recommending the articles.)

It turns out that there are certain situations where one can look at those who changed places, compare them with a “control group” that did not change places, and draw some plausible conclusions. Eric Chyn and Lawrence F. Katz provide an overview and contextual interpretation of this research in “Neighborhoods Matter: Assessing the Evidence for Place Effects” (Journal of Economic Perspectives, 35:4, pp. 197-222). Here’s how they describe perhaps the most prominent study in this area:

Beginning in 1994, the Moving to Opportunity housing mobility demonstration randomized access to housing vouchers and assistance in moving to less-distressed communities to about 4,600 families living in public housing projects located in deeply impoverished neighborhoods in five cities: Baltimore, Boston, Chicago, Los Angeles, and New York. The program randomized families into three groups: 1) a low-poverty voucher group (also called the “experimental group”) that was offered housing-mobility counseling and restricted housing vouchers that could only be used to move to low-poverty areas (Census tracts with 1990 poverty rates below 10 percent); 2) a traditional voucher group that was offered regular Section 8 housing vouchers that had no additional locational constraints (also called the Section 8 group); and 3) a control group that received no assistance through the program.

As researchers began to think seriously about place effects, they sought out other situations in which people ended up moving out of low-income areas. For example, studies looked at situations where people had to move because public housing was demolished, or where Hurricane Katrina destroyed existing housing, in comparison to outcomes for a similar group that was not forced to move. There are also studies where one group of low-income households is offered information and counseling to help them match up with rental options in areas with higher average incomes, while a control group is not offered such assistance and is thus much less likely to move to those other areas. Notice that all of these approaches, in different ways, have an element of randomness, which allows the researcher to make a plausible estimate of the causal effects of living in a different place.
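
To see the bare bones of how such an element of randomness gets used, here is a minimal sketch in Python with entirely made-up numbers: it simulates a randomized offer (as in Moving to Opportunity) and computes the simple treatment-control difference in average outcomes. The function names, move rates, and earnings figures are all hypothetical, not estimates from any of these studies.

```python
# Minimal sketch of an intention-to-treat comparison under a randomized offer.
# All numbers are invented for illustration; nothing here reproduces the studies above.
import random

random.seed(0)

def simulate_outcome(offered_voucher: bool) -> float:
    """Hypothetical adult earnings outcome: the offer nudges moves, moves nudge outcomes."""
    moved = random.random() < (0.5 if offered_voucher else 0.1)
    base = random.gauss(30_000, 8_000)
    return base + (2_000 if moved else 0.0)

treatment = [simulate_outcome(True) for _ in range(2_000)]
control = [simulate_outcome(False) for _ in range(2_000)]

itt = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Intention-to-treat estimate: ${itt:,.0f}")
# Because the offer was randomized, this difference can plausibly be read as the causal
# effect of the *offer*; scaling by the difference in move rates gives an estimate of
# the effect of actually moving.
```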

The overall finding from this line of research is that when lower-income households move to a higher-income neighborhood, there are substantial effects for younger children who grow up in the new neighborhood, smaller or even nil effects for those who are older teenagers at the time of relocation, and not much effect on the job or income outcomes for the adults in those households. Interestingly, the positive effects for younger children don’t seem to show up primarily in school test scores, but instead as gains in noncognitive skills, reflected in measures like the number of school absences or suspensions and the chances of repeating a grade. The authors write: “Studies of the Moving to Opportunity demonstration and Chicago public housing demolitions found no evidence that relocating to less distressed areas had impacts on the economic outcomes of adults, but both settings revealed large long-run gains for younger children …”

In the same issue, Tatyana Deryugina and David Molitor discuss “The Causal Effects of Place on Health and Longevity” (Journal of Economic Perspectives, 35: 4, pp. 47-70). They point out that the size of regional differences in health is actually fairly similar across the European Union and the United States. They write:

Three main results emerge from comparing the regional variation in life expectancy in the United States and Europe. First, average life expectancy is 2.8 years higher in Europe than in the United States. Second, the overall variation in life expectancy, as captured by the standard deviation or interdecile range of the life expectancy distribution, is similar in both contexts. Third, most of the regional variation in life expectancy in Europe is explained by country of residence, whereas in the United States, most of the variation is within-state.

A primary issue is the problem of figuring out whether it’s the place that matters, or whether it’s the prior sorting of people by income and occupation–which is often correlated with place. For a given place, health could also be affected by issues like local public health policies, or peer effects (like whether there is a culture of substance abuse or outdoor exercise), or by local environmental contamination. In addition, these factors often overlap: for example, a location with lower incomes may also receive less health care, be more affected by environmental pollution, and have peer effects that do not reinforce good health. It’s not obvious that “place” is always the most useful way to think about these kinds of factors, rather than focusing on the underlying determinants. The authors mention a classic example from the research in this area:

As a vivid example of geographic differences in mortality across the United States, Fuchs (1974) compared mortality rates in Nevada and Utah, which are neighboring states with similar climates and, at the time, similar income levels and physicians per capita. Fuchs noted that, nonetheless, adult mortality rates were substantially higher in Nevada than in Utah, which he attributed to Nevada’s high rates of cigarette and alcohol consumption as well as “marital and geographical instability.” Even today, the average person born in Utah has a life expectancy 1.9 years higher than the average person born in Nevada.

But broadly speaking, the approach is to look at movers between areas, and then to think carefully about what one can learn from such comparisons. For example, one source of data is the elderly, who are covered by Medicare insurance and who move between states. Patterns of medical practice vary across the US, so you can observe Medicare patients with similar earlier patterns of health care use who move to a place where Medicare recipients on average get more care, or where they get less care, or who don’t move at all. One can also look at doctors who move between areas and, comparing doctors who had similar patterns of Medicare charges before their move, see whether their pattern of Medicare charges tends to stay the same when they go to a different place. As the authors write:

Other indirect evidence that local conditions matter for health comes from papers that use movers to study how local conditions affect health care provision and other non-health outcomes that could ultimately affect health. For example, Song et al. (2010) show that when Medicare recipients move between regions, rates of medical diagnoses change. Finkelstein, Gentzkow, and Williams (2016) study Medicare recipients who move between areas and show that place of residence affects movers’ medical spending. Molitor (2018) looks at cardiologists who move and finds that, on average, their own practice patterns change by 60–80 percent of the difference in local norms between their new and original practice regions.
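
To make the movers logic concrete, here is a minimal sketch, with hypothetical numbers, of the quantity that results like Molitor’s are getting at: the share of the gap between origin and destination “local norms” that a mover’s own behavior closes after the move.

```python
# Sketch of the movers calculation behind findings like Molitor (2018).
# The numbers below are hypothetical, chosen only to illustrate the ratio.
def share_of_gap_closed(own_before: float, own_after: float,
                        origin_avg: float, dest_avg: float) -> float:
    """Change in the mover's own measure, as a fraction of the destination-origin gap."""
    return (own_after - own_before) / (dest_avg - origin_avg)

# A hypothetical cardiologist whose billing rises from 100 to 135 after moving from a
# region averaging 100 to one averaging 150 closes 70% of the gap.
print(share_of_gap_closed(own_before=100, own_after=135, origin_avg=100, dest_avg=150))
```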

There are lots of complexities in looking at place effects on health. So far, the research based on movers from one place to another (that is, not on correlations looking at health effects in different places) suggests that place effects on health are real, but does not yet give strong answers as to why they are real. The authors write:

The observed geographic dispersion in life expectancy and evidence from movers between areas strongly suggest that where one lives matters for when one dies. Determining whether place health effects are large or trivially small, however, has not been accomplished until very recently. New evidence comparing movers to other movers to estimate place health effects make it reasonable to conclude that, at least for some groups, place of residence has a sizable effect on health. However, more research is needed to build on these findings and, in particular, to understand the effect of place at younger ages on long-term longevity. Although there are many plausible mechanisms through which these place effects may materialize, the question of what it is exactly that causes some places to be better for health than others has so far not been answered directly by any existing study.

Forks: A Story of Technological Diffusion

The Thanksgiving holiday is a time when many of us make good use of our forks. Thus, it seems like an appropriate time to pass along this short note, “Introduction of Forks,” from the March 1849 issue of Stryker’s Quarterly Register and Magazine (pp. 204-5).

As any well authenticated account of the invention or introduction of any of our present customs, or modes of living, cannot but be both instructive and amusing, we insert the following account of the first introduction of the table fork into England, as related by Thomas Coryate in his book of travels through a part of Europe, A.D. 1608.

“Here I will mention what might have been spoken of before in discourse of the first Italian towne. I observed a custom in all those Italian cities and towns through which I passed, that is not used in any other country that I saw in my travels, neither doe I thinke that any other nation of Christendom doth use it, but only Italy. The Italian, and also most strangers that are commorant in Italy, doe alwaies at their meales use a little forke when they cut their meate. For while with their knife, which they hold in one hand, they cut the meate out of the dish, they fasten the fork, which they hold in the other hand, upon the same dish: so that whatsever he be that, setting in companie with any others at meale, should unadvisedly touch the dish of meate with his fingers, from which all the table doe cut, he will give occasion of offence to the companie, as having transgressed the laws of good manners, insomuch that for his error he shall be at least brow-beaten if not reprehended in wordes. This forme of feeding I understand is generally used in all places of Italy; their forkes being for the most part made of yron or steele and some of silver, but these are only used by gentlemen. The reason of this their curiosity is, because the Italian cannot by any meanes indure to have his dish touched with fingers, seeing all men’s fingers are not alike cleane. Hereupon I myself thought good to imitate the Italian fashion by this forked manner of cutting meate, not only while I was in Italy, but also in Germany, and oftentimes in England, since I came home, being once equipped for that frequent using of my forke, and by a certaine learned gentleman, a familiar friend of mine, one Mr. Lawrence Whitaker, who in his merry humour doubted not to call me at table furcifer, only for using a forke at feeding. and for no other cause.”

Of course, the invention and use of the fork goes back much earlier. Chad Ward provides a useful overview in “Origins of the Common Fork” (Leite’s Culinaria, May 6, 2009). For example:

Forks were in use in ancient Egypt, as well as Greece and Rome. However, they weren’t used for eating, but were, rather, lengthy cooking tools used for carving or lifting meats from a cauldron or the fire. Most diners ate with their fingers and a knife, which they brought with them to the table. Forks for dining only started to appear in the noble courts of the Middle East and the Byzantine Empire in about the 7th century and became common among wealthy families of the regions by the 10th century. Elsewhere, including Europe, where the favored implements were the knife and the hand, the fork was conspicuously absent.

Imagine the astonishment then when in 1004 Maria Argyropoulina, Greek niece of Byzantine Emperor Basil II, showed up in Venice for her marriage to Giovanni, son of Pietro Orseolo II, the Doge of Venice, with a case of golden forks—and then proceeded to use them at the wedding feast. They weren’t exactly a hit. She was roundly condemned by the local clergy for her decadence, with one going so far as to say, “God in his wisdom has provided man with natural forks—his fingers. Therefore it is an insult to him to substitute artificial metal forks for them when eating.”

When Argyropoulina died of the plague two years later, Saint Peter Damian, with ill-concealed satisfaction, suggested that it was God’s punishment for her lavish ways. “Nor did she deign to touch her food with her fingers, but would command her eunuchs to cut it up into small pieces, which she would impale on a certain golden instrument with two prongs and thus carry to her mouth. . . . this woman’s vanity was hateful to Almighty God; and so, unmistakably, did He take his revenge. For He raised over her the sword of His divine justice, so that her whole body did putrefy and all her limbs began to wither.”

Doomed by God for using a fork. Life was harsh in the 11th century.

However, forks did gradually gain a foothold in Italy, and then France, and then crossed the English Channel with Thomas Coryate, as noted above. The early United States had few forks, so that most people ate with the combined efforts of a knife and a spoon, but even among the Americans, forks had apparently become standard practice by about 1850.

An Economist Chews over Thanksgiving

 As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that. [This is an updated, amended, elongated, and cobbled-together version of a post that was first published on Thanksgiving Day 2011.]

The most recent detailed “Overview of the U.S. Turkey Industry” from the U.S. Department of Agriculture appears to date back to 2007, although an update was published in April 2014. Some themes about the turkey market waddle out from those reports on both the demand and supply sides.

On the demand side, the quantity of turkey consumed per person rose dramatically from the mid-1970s up to about 1990, then declined somewhat, but appears to have made a modest recovery in the last few years. The figure below was taken from the Eatturkey.com website run by the National Turkey Federation a couple of years ago.

Turkey companies are what economists call “vertically integrated,” which means that they either carry out all the steps of production directly, or control these steps with contractual agreements. Over time, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 233 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent.
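
For readers who like to check the arithmetic, here is a quick back-of-the-envelope in Python using only the figures quoted above; it reproduces the roughly 31 percent, 233 thousand, 704 thousand, and 41 percent numbers.

```python
# Back-of-the-envelope check of the USDA figures quoted above.
hatcheries_1975, hatcheries_2007 = 180, 55
capacity_1975, capacity_2007 = 41.9e6, 38.7e6  # total incubator capacity, eggs

print(f"Share of 1975 hatcheries remaining: {hatcheries_2007 / hatcheries_1975:.0%}")          # ~31%
print(f"Avg capacity per hatchery, 1975: {capacity_1975 / hatcheries_1975 / 1e3:.0f} thousand")  # ~233
print(f"Avg capacity per hatchery, 2007: {capacity_2007 / hatcheries_2007 / 1e3:.0f} thousand")  # ~704

weight_1986, weight_2006 = 20.0, 28.2  # average weight per bird, pounds
print(f"Gain in weight per bird: {(weight_2006 / weight_1986 - 1):.0%}")  # ~41%
```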

The 2014 USDA report points out that the capacity of eggs per hatchery has continued to rise (again, references to charts omitted):

For several decades, the number of turkey hatcheries has declined steadily. During the last six years, however, this decrease began to slow down. As of 2013, there are 54 turkey hatcheries in the United States, down from 58 in 2008, but up from the historical low of 49 reached in 2012. The total capacity of these facilities remained steady during this period at approximately 39.4 million eggs. The average capacity per hatchery reached a record high in 2012. During 2013, average capacity per hatchery was 730 thousand (data records are available from 1965 to present).

U.S. agriculture is full of examples of remarkable increases in yields over periods of a few decades, but such examples always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time from the 2007 report.

The production of turkey is not a very concentrated industry, with three relatively large producers (Butterball, Jennie-O, and Cargill Turkey & Cooked Meats) and then more than a dozen mid-sized producers. Given this reasonably competitive environment, it’s interesting to note that the price markups for turkey–that is, the margin between the wholesale and the retail price–have in the past tended to decline around Thanksgiving, which obviously helps to keep the price lower for consumers. However, this pattern may be weakening over time, as margins have been higher in the last couple of Thanksgivings. Kim Ha of the US Department of Agriculture spells this out in the “Livestock, Dairy, and Poultry Outlook” report of November 2018. The vertical lines in the figure show Thanksgiving. She writes: “In the past, Thanksgiving holiday season retail turkey prices were commonly near annual low points, while wholesale prices rose. … The data indicate that the past Thanksgiving season relationship between retail and wholesale turkey prices may be lessening.”
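
As a minimal illustration of the margin being discussed, here is a sketch with prices that are purely hypothetical; it only shows how the retail-over-wholesale margin compresses when the shelf price is held down while the wholesale price rises.

```python
# Hypothetical illustration of the retail-over-wholesale margin discussed above.
def margin_per_pound(retail: float, wholesale: float) -> float:
    """Retail price minus wholesale price, dollars per pound."""
    return retail - wholesale

# Made-up prices: if retailers hold the shelf price near $1.40/lb at Thanksgiving
# while the wholesale price rises to $1.20/lb, the margin compresses.
print(f"Ordinary week margin:     ${margin_per_pound(retail=1.40, wholesale=0.90):.2f}/lb")
print(f"Thanksgiving week margin: ${margin_per_pound(retail=1.40, wholesale=1.20):.2f}/lb")
```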

For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical US household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose 6% from 2020 to 2021. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, which means that inflation in the Classic Thanksgiving Dinner has actually been an OK measure of the overall inflation rate.
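
Here is a minimal sketch of the calculation behind the two lines, using hypothetical basket costs and CPI levels rather than the actual Farm Bureau or BLS figures: the nominal change in the basket’s cost, and the same change after deflating by overall inflation.

```python
# Sketch of how the two lines in the figure relate: a nominal basket cost and the same
# cost deflated by an overall price index. All numbers below are hypothetical.
def real_cost(nominal_cost: float, cpi: float, cpi_base: float) -> float:
    """Express a nominal basket cost in base-year dollars."""
    return nominal_cost * (cpi_base / cpi)

dinner_2020, dinner_2021 = 50.00, 53.00   # hypothetical basket costs
cpi_2020, cpi_2021 = 100.0, 106.2         # hypothetical overall price index levels

nominal_change = dinner_2021 / dinner_2020 - 1
real_change = real_cost(dinner_2021, cpi_2021, cpi_2020) / dinner_2020 - 1
print(f"Nominal change in dinner cost: {nominal_change:.1%}")   # 6.0%
print(f"Inflation-adjusted change:     {real_change:.1%}")      # near zero if dinner inflation ~ CPI inflation
```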

Thanksgiving is a distinctively American holiday, and it’s my favorite. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

Holding Down Costs of Megaprojects: The Madrid Subway Example

Imagine that at some point in your life, for your sins, you are part of a panel to evaluate different methods of construction for a megaproject–say, an extension of a light-rail public transit system. Let’s also say that your goals are that the project should get done reasonably soon and at reasonable cost. What are some flashing red lights that suggest the plans may be going off-kilter?

Bent Flyvbjerg tackles this question in his essay, “Make Megaprojects More Modular,” written for the November-December 2021 issue of the Harvard Business Review. The key lesson is suggested by the subheading: “Repeatable design and quick iterations can reduce costs and risks and get to revenues faster.” If someone starts talking about calling in famous architects for new designs, or famous artists for unique decorations, be wary. If they start talking about custom-designed components, be even more wary. Sure, be a little creative if you can. But design the project to happen in reasonably sized bites, each one briskly implemented, using readily available off-the-shelf components and technology. Flyvbjerg gives several examples of success stories. One is from the guy responsible for expanding the subway system in Madrid.

Manuel Melis Maynar understands the importance of scalability. An experienced civil engineer and the president of Madrid Metro, he was responsible for one of the largest and fastest subway expansions in history. Subway construction is generally seen as custom and slow by nature. It can easily take 10 years from the decision to invest in a new line until trains start running, as was the case with Copenhagen’s recent City Circle Line. And that’s if you don’t encounter problems, in which case you’re looking at 15 to 20 years, as happened with London’s Victoria line. Melis figured there had to be a better way, and he found it.

Begun in 1995, the Madrid subway extension was completed in two stages of just four years each (1995 to 1999: 56 kilometers of rail, 37 stations; 1999 to 2003: 75 kilometers, 39 stations), thanks to Melis’s radical approach to tunneling and station building. In project management terms, it offers a stark contrast to the experience of the Eurotunnel, which has cost its investors dearly. Melis’s success was the result of applying three basic rules to the design and management of the project.

No monuments. Melis decided that no signature architecture would be used in the stations, although such embellishment is common, sometimes with each station built as a separate monument. (Think Stockholm, Moscow, Naples, and London’s Jubilee line.) Signature architecture is notorious for delays and cost overruns, Melis knew, so why invite trouble? His stations would each follow the same modular design and use proven cut-and-cover construction methods, allowing replication and learning from station to station as the metro expanded.

No new technology. The project would eschew new construction techniques, designs, and train cars. Again, this mindset goes against the grain of most subway planners, who often pride themselves on delivering the latest in signaling systems, driverless trains, and so on. Melis was keenly aware that new product development is one of the riskiest things any organization can take on, including his own. He wanted none of it. He cared only for what worked and could be done fast, cheaply, safely, and at a high level of quality. He took existing, tried-and-tested products and processes and combined them in new ways. Does that sound familiar? It should. It’s the way Apple innovates, with huge success.

Speed. Melis understood that time is like a window. The bigger it is, the more bad stuff can fly through it, including unpredictable catastrophic events, or so-called black swans. … Traditionally, cities building a metro would bring in one or two tunnel-boring machines to do the job. Melis instead calculated the optimal length of tunnel that one boring machine and team could deliver—typically three to six kilometers in 200 to 400 days—divided the total length of tunnel he needed by that amount, and then hired the number of machines and teams required to meet the schedule. At times, he used up to six machines at once, completely unheard of when he first did it. His module unit was the optimal length of tunnel for one machine, and like the station modules, the tunnel modules were replicated over and over, facilitating positive learning. As an unforeseen benefit, the tunnel-boring teams began to compete with one another, accelerating the pace further. They’d meet in Madrid’s tapas bars at night and compare notes on daily progress, making sure their team was ahead, transferring learning in the process. And by having many machines and teams operating at the same time, Melis could also systematically study which performed best and hire them the next time around. More positive learning. A feedback system was set up to avoid time-consuming disputes with community groups, and Melis persuaded them to accept tunneling 24/7, instead of the usual daytime and weekday working hours, by asking openly if they preferred a three-year or an eight-year tunnel-construction period.
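
The scheduling arithmetic Melis used is simple enough to sketch. The per-machine figure below is hypothetical, chosen only so the example lines up with the “up to six machines” mentioned in the quote; it is not his actual planning number.

```python
# Sketch of the scheduling arithmetic described above: pick the tunnel length one
# machine-and-team can deliver in the available time, then hire enough machines to
# cover the whole route. The per-machine figure is hypothetical.
import math

def machines_needed(total_km: float, km_per_machine: float) -> int:
    """Number of boring machines/teams, rounding up to cover the whole route."""
    return math.ceil(total_km / km_per_machine)

total_tunnel_km = 56      # e.g., the 1995-1999 stage mentioned above
km_per_machine = 10       # hypothetical: what one team might deliver within the four-year window
print(machines_needed(total_tunnel_km, km_per_machine))  # -> 6 machines
```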

The worry, of course, is that rules like this will make the megaproject boring and perhaps low-quality. But quality problems are often what arise when using new designs, custom parts, and new technologies–especially when it comes time to repair and replace parts of the system. Also, is the purpose of the megaproject to do a job for society–in this case to get people from A to B–or is it to give politicians a pretty place for a press conference? Flyvbjerg writes:

But go to Madrid and you will find large, functional, airy stations and trains—nothing like the dark, cramped catacombs of London and New York. Melis’s metro is a workhorse, with no fancy technology to disrupt operations. It transports millions of passengers, day in and day out, year after year, exactly as it is supposed to do. Melis achieved this at half the cost and twice the speed of industry averages—something most thought impossible.

Infrastructure Projects: History of Understated Costs and Overstated Benefits

It sometimes seems as if every big infrastructure project underestimates its costs and overpromises its benefits. But is that a few bad apples that get a lot of publicity, or is it the real overall pattern? Bent Flyvbjerg and Dirk W. Bester put together some evidence in “The Cost-Benefit Fallacy: Why Cost-Benefit Analysis Is Broken and How to Fix It” (Journal of Benefit-Cost Analysis, published online October 11, 2021).

They collect “a sample of 2062 public investment projects with data on cost and benefit overrun. The sample includes eight investment types: Bridges, buildings, bus rapid transit (BRT), dams, power plants, rail, roads, and tunnels. Geographically, the sample incorporates investments in 104 countries on six continents, covering both developed and developing nations, with the majority of data from the United States and Europe. Historically, the data cover almost a century, from 1927 to 2013.” Not all of these studies have data on both expected and realized benefits and costs. But here’s a table summarizing the results: On average, there are cost overruns in every category, and overstated benefits in every category. In more detailed results, they show that this pattern hasn’t evolved much over time.

Of course, the average doesn’t apply to every project. Indeed, sometimes there are cost overruns but then even bigger benefits than expected. But the average pattern is disheartening. Indeed, “[c]onsidering cost and benefit overrun together, we see that the detected biases work in such a manner that cost overruns are not compensated by benefit overruns, but quite the opposite, on average. We also see that investment types with large average cost overruns tend to have large average benefit shortfalls.”
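
A small worked example, with hypothetical numbers, shows why this combination is so damaging: a cost overrun and a benefit shortfall together can push a project that looked comfortably worthwhile below break-even.

```python
# Sketch of the bookkeeping behind "cost overrun" and "benefit shortfall": how an
# ex-ante benefit-cost ratio degrades ex post. All numbers are hypothetical.
def ex_post_bcr(planned_benefit: float, planned_cost: float,
                benefit_ratio: float, cost_ratio: float) -> float:
    """Realized benefit-cost ratio given realized/planned ratios for benefits and costs."""
    return (planned_benefit * benefit_ratio) / (planned_cost * cost_ratio)

# A project sold with a BCR of 1.5, hit by a 40% cost overrun and benefits at 90% of forecast:
print(ex_post_bcr(planned_benefit=150, planned_cost=100, benefit_ratio=0.9, cost_ratio=1.4))
# -> roughly 0.96: the project ends up below break-even.
```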

The essential problem here, Flyvbjerg and Bester argue, is that those who do these estimates of benefits and costs are overly optimistic.

The root cause of cost overrun, according to behavioral science, is the well-documented fact that planners and managers keep underestimating the importance of schedules, geology, market prices, scope changes, and complexity in investment after investment. From the point of view of behavioral science, the mechanisms of scope changes, complex interfaces, archeology, geology, bad weather, business cycles, and so forth are not unknown to public investment planners, just as it is not unknown to planners that such mechanisms may be mitigated. However, planners often underestimate these mechanisms and overestimate the effectiveness of mitigation measures, due to well-known behavioral phenomena like overconfidence bias, the planning fallacy, and strategic misrepresentation.

Thus, the question is how to get those estimating the benefits and costs of mega-projects to be more realistic. The authors offer some suggestions.

“Reference class forecasting” is the idea of basing your estimates on the actual costs and benefits of similar projects in other places, representing a range of better and worse outcomes. Another idea is to give the benefit-cost forecasters some “skin in the game”: “Lawmakers and policymakers should develop institutional setups that reward forecasters who get their estimates right and punish those who do not.” This can be done in friendly ways, with bonuses, or in punitive ways, with lawsuits and even criminal punishments when things go badly wrong. There can be a rule in advance that independent audits will be carried out during and after the project–perhaps even by several different auditors. Finally, the decisions about whether to proceed shouldn’t just involve technocrats and spreadsheets, but also need public involvement. For example, if the public is going to demand processes that slow down or complicate a project, that needs to be taken into account at the start–even if those demands may seem irrelevant or counterproductive to the forecasters.
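
Here is a minimal sketch of the reference class forecasting idea, with invented overrun data. It uses a simplified percentile rule and is not Flyvbjerg and Bester’s actual procedure; it only illustrates the logic of budgeting off the experience of comparable past projects rather than off the project’s own estimate.

```python
# Simplified reference class forecasting: uplift a project's own cost estimate by a
# percentile of the overrun distribution seen in similar past projects.
# The overrun data below are invented for illustration.
def rcf_budget(own_estimate: float, past_overrun_ratios: list[float],
               percentile: float = 0.8) -> float:
    """Budget that past experience suggests will be enough `percentile` of the time."""
    ranked = sorted(past_overrun_ratios)
    idx = min(int(percentile * len(ranked)), len(ranked) - 1)
    return own_estimate * ranked[idx]

past_overruns = [1.0, 1.1, 1.15, 1.2, 1.3, 1.4, 1.5, 1.7, 1.9, 2.4]  # actual/estimated cost
print(rcf_budget(own_estimate=500e6, past_overrun_ratios=past_overruns))  # ~ $950 million
```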

The authors note that these kinds of changes are being used in various places and by various governments around the world. If there are plans for a megaproject where you live, you might want to think about whether these processes might lead to more accurate benefit-cost estimates, too.

Sorting Men and Women by College Major and Occupation

Men and women tend to sort into different college majors. Even given the same college major, they tend to sort into different jobs. Carolyn M. Sloane, Erik G. Hurst, and Dan A. Black explore these patterns, and some implications for wage differences between men and women, in “College Majors, Occupations, and the Gender Wage Gap” (Journal of Economic Perspectives, Fall 2021, 35:4, pp. 223-48).

(Full disclosure: I’ve been the Managing Editor at JEP since the first issue in 1987, so I am perhaps predisposed to think the articles are of wider interest. Fuller disclosure: All JEP articles back to the first issue have been freely available online for a decade now, courtesy of the American Economic Association, so neither I nor anyone else gets any direct financial benefit if you choose to check out the journal.)

For example, here are some broad groupings of majors that tend to be male- or female-dominated. The left-hand panel shows majors where the female/male share of majors started at less than 1. Biological sciences has risen above one, and is now majority female, but the other majors have changed much less. The right-hand panel shows broad categories of majors where the female/male share of majors started above one–in some cases, several multiples above one. None of these areas has switched to majority male; in one area, psychology, the female dominance has become much more pronounced.

Sloane, Hurst, and Black aren’t trying to explain why these patterns arose or why they persist (although that’s an obvious topic for speculation!). Instead, they are interested in documenting the extent of this difference, its persistence over time, and the fact that the male-dominated majors on average have higher wages. Their data breaks down the broad major categories into 134 detailed majors: for example, the category of “Engineering” contains 17 different majors. They write:

We find that women are systematically sorted into majors with lower potential wages relative to men. For example, Aerospace Engineering, one of the highest potential wage majors, is 88 percent male, while Early Childhood Education, one of the lowest potential wage majors, is 97 percent female. We also find that such patterns are long-standing and have been slow to converge. Overall, college-educated women born in the 1950s matriculated with majors that had potential wages 12 percent lower than men from their cohort. That gap fell to about 9 percent for the 1990 birth cohort. Even after some convergence in major sorting between men and women during the last 40 years, the youngest birth cohorts of women are still sorted into majors with lower potential wages than their male peers. Intriguingly, much of the convergence in major sorting between men and women occurred between the 1950 and 1975 birth cohorts, with a modest divergence for recent cohorts.

The authors use an interesting method of comparing wages across majors. For every major, they look at the median wages paid to a middle-aged, US-born, white male in that category. Thus, they are not trying to measure gaps between female and male wages, or the extent of discrimination. Instead, they are noting that wages are lower in female-dominated majors even if one just compares white men of the same age with different majors.
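
Here is a minimal sketch of that bookkeeping, with hypothetical majors, wages, and shares: each major gets a single “potential wage,” and the gap then comes entirely from the different distributions of men and women across majors, not from pay differences within a major.

```python
# Sketch of the "potential wage" accounting: assign each major one wage (in the paper,
# the median wage of middle-aged, US-born white men in that major), then average over
# each gender's distribution of majors. All majors, wages, and shares are hypothetical.
potential_wage = {"engineering": 95_000, "business": 75_000,
                  "psychology": 55_000, "education": 48_000}

male_shares = {"engineering": 0.30, "business": 0.40, "psychology": 0.15, "education": 0.15}
female_shares = {"engineering": 0.10, "business": 0.35, "psychology": 0.30, "education": 0.25}

def avg_potential_wage(shares: dict[str, float]) -> float:
    """Share-weighted average of the potential wages across majors."""
    return sum(share * potential_wage[major] for major, share in shares.items())

men, women = avg_potential_wage(male_shares), avg_potential_wage(female_shares)
print(f"Gap from major sorting alone: {1 - women / men:.1%}")
# The same wage is assigned to everyone with a given major, so the gap here reflects
# sorting across majors, not discrimination or pay differences within a major.
```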

They then take the idea of sorting one step further. Men and women who have the same major tend to sort into different occupations. Here’s a table illustrating this pattern. The top category shows that for education majors, 68% of women end up as teachers, compared with 50% of men. But among education majors, 18% of men end up in executive/manager jobs, compared with 9% of women. Similarly, the next panel shows that among nursing/pharmacy majors, women are more likely to end up as nurses, while men are more likely to end up in executive/manager roles.

The authors have data on 251 distinct occupations. They find that when women and men have the same majors, women have historically sorted into lower-paid occupations (again, just noting that this happens, while not investigating the question of how or why). For the effect of this occupational sorting for men and women with the same college major, they write:

[H]ow has occupational sorting conditional on major evolved across generations of US college graduates? We find that while women are sorted into occupations with lower potential wages conditional on major, this gap is closing somewhat over time. For the 1950 birth cohort, for example, women on average sorted to occupations with 11 percent lower potential earnings relative to otherwise similar men with the same majors. This gap narrowed to about 9 percent for the 1990 birth cohort. Almost all of the convergence occurred within highest potential earning majors. For example, women from the 1950 cohort who majored in Engineering—a high potential earning major—sorted into occupations with potential wages that were 14 percent lower than men from the same cohort who also majored in Engineering. For the 1990 birth cohort, however, women who majored in Engineering ended up working in occupations with roughly the same potential wages as their male peers.

Of course, these patterns of sorting by college major and occupation are also taking place against a backdrop of other changes: a rising share of women graduating from college, expansion of the US health care sector, falling birth rates, and so on. But the importance of sorting that happens early in life in college major and occupation has a lasting importance to later wages. The authors find that accounting for sorting by college major, and by occupation given the same major, can explain about 60% of the wage gap between men and women college graduates.
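
The “share explained” figure at the end is itself just a ratio; here is a minimal sketch with illustrative inputs (the 60% output matches the text, but the two gap numbers below are stand-ins, not the paper’s estimates).

```python
# Sketch of the "share explained" accounting: compare the gap in potential wages
# (driven by sorting into majors and occupations) to the observed wage gap.
# Both inputs are hypothetical, chosen only to illustrate the calculation.
observed_gap = 0.20   # e.g., women's wages 20% below men's among college graduates
sorting_gap = 0.12    # gap in potential wages from major + occupation-within-major sorting

print(f"Share of the gap accounted for by sorting: {sorting_gap / observed_gap:.0%}")  # 60%
```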

Some Eviction Economics

Part of the Coronavirus Aid, Relief, and Economic Security Act (the CARES Act) signed into law by President Trump on March 27, 2020, was a national moratorium on evictions. However, the moratorium was scheduled to end on July 24, 2020–although it effectively required an additional 30 days beyond that date before landlords could file notices to vacate. Congress did not vote to extend the moratorium. However, the Centers for Disease Control then announced a national eviction moratorium to start on September 4, 2020. The US Supreme Court held in August 2021 that the CDC lacked the power to make this policy decision without the passage of a law through Congress and signed by the president. Of course, the Supreme Court decision was not about whether the eviction moratoriums were good policy or had beneficial effects. Here, I set aside the legal questions and focus on what we know about the outcomes.

It’s worth saying at the start that data on rental evictions is not nationally centralized, and is not up-to-the-minute. Every study has its own sample. However, certain patterns do seem to emerge across studies. Jasmine Rangel, Jacob Haas, Emily Lemmerman, Joe Fish, and Peter Hepburn at The Eviction Lab at Princeton University provide evidence on overall eviction patterns in “Preliminary Analysis: 11 months of the CDC Moratorium” (August 21, 2021). Their project collects data from 31 cities and six full states, representing about one-fourth of all the renters in the country. Here’s their estimate, based on the sites they track, of how the total number of evictions would have evolved starting in January 2020, compared to what actually happened. Evictions fell by about half starting in March 2020, and the gap between expected and actual evictions continued to expand after the CDC moratorium was enacted in September 2020.

This drop in evictions has considerable local variation, in part because some states and cities enacted new eviction restrictions of their own. For example, here’s a figure showing the pattern across cities.

The researchers at the Eviction Lab also use data on the sites they track, together with historical data from the rest of the country, to make some overall estimates. They write: “In total, we estimate that federal, state, and local policies helped to prevent at least 2.45 million eviction filings since the start of the pandemic (March 15, 2020).” However, as far as I can tell, this estimate assumes that without the moratorium, evictions would have remained at pre-pandemic levels even after the pandemic started, which isn’t obvious. One can imagine that factors other than the moratorium made a difference, too.
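
Here is a rough sketch of the counterfactual arithmetic behind an estimate like that, with invented monthly numbers; as just noted, the choice of baseline is doing most of the work.

```python
# Sketch of the counterfactual arithmetic behind "filings prevented": expected filings
# at a historical baseline minus observed filings, summed over months. The monthly
# numbers are invented for illustration; the baseline assumption is the key judgment call.
baseline_monthly_filings = 100_000   # hypothetical pre-pandemic average for the tracked sites
observed_filings = [52_000, 40_000, 45_000, 48_000, 50_000, 47_000]  # hypothetical months

prevented = sum(baseline_monthly_filings - obs for obs in observed_filings)
print(f"Estimated filings prevented over these months: {prevented:,}")
```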

What were some of the benefits of the eviction moratorium? The US Department of Housing and Urban Development published a magazine called Evidence Matters, with the Summer 2021 issue devoted to several articles on the theme of “Evictions.” The articles are heavily footnoted with references to published studies. The first article is titled “Affordable Housing, Eviction, and Health.”

The HUD article points out that the most recent national evidence, from 2016, suggests that about 8 of every 100 renters got an eviction notice that year. It’s not clear how the number of official eviction notices translates into actual evictions: evidence from some cities suggests that informal evictions are more common than formal ones; evidence from other cities suggests that only about half of eviction notices lead to an actual eviction. Both of these confounding factors can be true, of course. The article notes (footnotes omitted):

Nonpayment of rent is the primary reason for eviction, which itself can arise from various causes, including rising rents combined with stagnant income growth and persistent poverty, job or income loss, or a sudden economic shock such as a health emergency or a car breakdown. Other reasons include lease violations, which can be technical in nature; property damage; and disruptions, such as police calls. Landlords, for their own reasons, may force tenants to move, either informally or through a legal “no-fault” eviction. Renters often are evicted over relatively small amounts of money — in many cases, less than a full month’s rent. … These studies built on the findings of local investigations. The Milwaukee Area Renters Study found higher rates of eviction for African-American, Latinx, and lower-income renters and renters with children. Neighborhood crime and eviction rates, the number of children in a household, and “network disadvantage” — defined by Desmond and Gershenson as “the proportion of one’s strong ties to people who are unemployed, addicted to drugs, in abusive relationships, or who have experienced major, poverty-inducing events (e.g., incarceration, teenage pregnancy) to increase his or her propensity for eviction” — are factors associated with an increased likelihood of eviction.

During the pandemic, some evidence suggests that the eviction moratorium, by keeping households in place, helped to limit the spread of COVID.

Eviction is a particular threat to health during a pandemic because, as Benfer explains, “we know that eviction results in doubling up, in couch surfing, in residing in overcrowded environments, in being forced to use public facilities, and, at the same time, not being able to comply with pandemic mitigation strategies like wearing a mask, cleaning your PPE [personal protective equipment], social distancing, and sheltering in place.” Epidemiological modeling under counterfactual scenarios comparing results with a strict moratorium against results without a moratorium suggests that evictions increase COVID-19 infection rates significantly. …

By studying COVID-19 incidence and mortality in 43 states and the District of Columbia with varying expiration dates for their eviction moratoria, Leifheit et al. found that “COVID-19 incidence was significantly increased in states that lifted their moratoriums starting 10 weeks after lifting, with 1.6 times the incidence…[and] 16 or more weeks after lifting their moratoriums, states had, on average, 2.1 times higher incidence and 5.4 times higher mortality.” The researchers conclude that, nationally, expiring eviction moratoria are associated with a total of 433,700 excess COVID-19 cases and 10,700 excess deaths. Another study estimates that, had eviction moratoria been implemented nationwide from March 2020 through November 2020, COVID-19 infection rates would have been reduced by 14.2 percent and COVID-19 deaths would have been reduced by 40.7 percent.

A certain amount of the literature on the eviction moratorium focuses so intensely on the outcomes for renters that it barely mentions landlords, although when it comes to understanding rental markets, this is like discussing the yin without the yang. But there are some exceptions. Elijah de la Campa, Vincent J. Reina, and Christopher Herbert published “How Are Landlords Faring During the COVID-19 Pandemic? Evidence from a National Cross-Site Survey” (Joint Center for Housing Studies of Harvard University, August 2021), based on a national survey of landlords carried out from February to May 2021. The research group at JP Morgan Chase recently published “How did landlords fare during COVID?” (October 2021), which is based on Chase customers who are small business owners and have indicated that they own a residential property that is rented out, or customers who have a mortgage from Chase on a multifamily or investment property. Both sources of data have their limitations. But some overall patterns do emerge.

The studies both find that many landlords experienced real disruptions of income during the pandemic. The Harvard study found: “The share of landlords collecting 90 percent or more of yearly rent fell 30 percent from 2019 to 2020. … Ten percent of all landlords collected less than half of their yearly rent in 2020, with smaller landlords (1-5 units) most likely to have tenants deeply behind on rental payments.” The JP Morgan Chase study finds that “in the spring of 2020 … rental revenue for the median landlord was down about 20 percent relative to 2019.”

On the other side, for many landlords the eviction moratorium wasn’t as bad as it might have been. Many renters were receiving substantial payments from the government, and some of those flowed through to landlords. Some landlords who had mortgages on their rental properties were able to take advantage of policies to push back those payments for a time. Both the Harvard and the JP Morgan Chase studies find that many landlords also reduced or postponed their spending on maintenance. Others put their rental properties up for sale. The Harvard survey found: “The share of landlords deferring maintenance and listing their properties for sale also increased in 2020 (5 to 31 percent and 3 to 13 percent, respectively)…”

The federal government did allocate funds to help renters, and thus landlords, but that particular program never really got off the ground. The JPMorgan study puts it this way (again, footnotes omitted):

Between the Emergency Rental Assistance Program and the American Rescue Plan Act, $46.5 billion of rental assistance has been made available by the federal government for states and localities to distribute. As of the end of September [2021], less than a quarter of the funds have been distributed. The distribution of these funds has been hampered by onerous paperwork requirements for both tenants and landlords to prove that tenants meet strict requirements to qualify for assistance, including matching information from the renter and the landlord. Among the many challenges, many of the most vulnerable tenants are not part of the formal rental market (e.g., subletting, renting illegal units, striking informal agreements, etc.) and are not able to provide the leases or other paperwork that is required of them to receive aid. Government officials have altered the rules of the program over time (e.g., allowing for self-attestation of need, providing advances while paperwork is processed, increasing flexibility for what the funds can be used for, etc.) to accelerate the process for getting assistance to needy families when it became clear that paperwork had become too much of a bottleneck. Such flexibility will be key to helping landlords as the pandemic drags on and keeping tenants in their homes as the expiration of various eviction moratoriums rapidly approaches. This need is especially acute for smaller landlords as they are more likely to supply affordable rental housing and rent shortfalls during the pandemic has caused more of them to sell their rental properties. Helping these landlords helps to preserve our supply of affordable housing.

Renters tend to have lower incomes than landlords, and renters suffered more in financial terms during the pandemic than landlords. But given that the US rental housing markets are fundamentally based on private-sector rentals, the ability and willingness of landlords to provide affordable rental housing in the future is important.

The issue of what rules should govern rental evictions existed before the pandemic, and will be a perennial topic moving ahead as well. Whatever the case for a national moratorium as a short-term or even a medium-term step, it surely can’t be a permanent policy. The HUD publication goes into these issues and discusses various local programs, although I don’t have a strong sense of which localities have rules that work better or worse. The HUD publication also offers some interesting evidence that evictions tend to be highly concentrated in certain areas and even among certain landlords in those areas. As the report notes: “Among the implications of these findings is that interventions targeted at the neighborhoods, buildings, and landlords responsible for significant numbers of evictions can have a profound impact.”

Renewable Energy and Its Need for Minerals

Solar panels and wind turbines are physical objects, and they need physical inputs. In particular, a dramatic expansion of solar and wind power will require a dramatic expansion of the production of a range of key minerals. The IMF includes a short “Special Feature” in its most recent World Economic Outlook (October 2021) on this subject: “Clean Energy Transition and Metals: Blessing or Bottleneck?” The IMF writes:

To limit global temperature increases from climate change to 1.5 degrees Celsius, countries and firms increasingly pledge to reduce carbon dioxide emissions to net zero by 2050. Reaching this goal requires a transformation of the energy system that could substantially raise the demand for metals. Low-greenhouse-gas technologies—including renewable energy, electric vehicles, hydrogen, and carbon capture—require more metals than their fossil-fuel-based counterparts. …

Here’s a table showing a list of the main “transition metals”–that is, metals likely to be important in the energy transition.

Here’s a figure showing how the quantity of these metals needed is likely to rise by the 2030s. Notice that the left-hand axis is measured as a multiple of how much consumption of these metals by the 2030s will exceed consumption in the 2010s. Notice also that the first listed element, lithium, is being measured on the much higher right-hand axis.

Finally, here’s a figure showing how the production and reserves of four of these key transition metals are currently concentrated in a few countries–and the US does not appear as a major producer of any of them. Thus, an implication of the transition to current technologies of cleaner energy is US dependence on the countries shown here for key inputs, and not all these countries are both friendly and stable. The IMF report focuses on a subset of these metals: “The four representative metals chosen for in-depth analysis are copper, nickel, cobalt, and lithium. Copper and nickel are well-established metals. Cobalt and lithium are probably the most promising rising metals. In the IEA’s Net Zero by 2050 emissions scenario, total consumption of lithium and cobalt rises by a factor of more than six, driven by clean energy demand, while copper shows a twofold and nickel a fourfold increase in total consumption …”

Many of those who are most strongly in favor of a swift move to cleaner energy also have severe qualms about an increase in mining. Political conflicts thus arise. In northern Nevada, a company called Lithium Americas believes it has discovered at a location called Thacker Pass one of the world’s largest deposits of lithium. However, local protestors are pushing back hard against mining this lithium. The protesters clearly do not believe that the environmental concerns can be overcome. Indeed, one story notes: “At the Thacker Pass camp, activists who call themselves `radical environmentalists’ hope that addressing these challenges will press nations to choose to drastically reduce car and electricity use to meet their climate goals rather than develop mineral reserves to sustain lifestyles that require more energy.”

I should perhaps emphasize that these kinds of extrapolations about long-run demand need to be treated with care. Such predictions are premised on current technology. If demand for these minerals spiked, and their prices spiked as well, it would presumably unleash a set of incentives for finding ways to conserve on their use, to find cheaper alternatives, to recycle from previous uses, and so on. But one can at least say that given current technology, green energy advocates face a dilemma here: to support a rapid expansion of clean-energy technologies, you need to also support substantial increases in mining operations. And when it comes to the environmental damage from such mining for transition metals, it’s worth remembering that the damage is likely to be considerably less if such operations are carried out in the United States than if the expanded mining is done in some of the other countries that are main potential sources for such metals.

Opioid Overdoses: Worse Again

Deaths from overdoses, especially opioid overdoses, are getting worse. Here’s a graph from the Centers for Disease Control. Each point plots the cumulative deaths from drug overdoses in the previous 12 months. Thus, in January 2015, on the left-hand side of the figure, there had been about 50,000 drug overdose deaths in the previous 12 months. By April 2021, on the right-hand side of the figure, there had been about 100,000 drug overdose deaths in the previous 12 months. The figure also shows that the problem seemed to have leveled out for a while in 2018 and 2019, but with the pandemic in 2020 it started getting worse again.
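
For clarity about what the graph is plotting, here is a minimal sketch of a trailing 12-month total computed from monthly counts; the monthly numbers are invented, not CDC data.

```python
# Sketch of the statistic plotted in the CDC figure: each point is the sum of overdose
# deaths over the preceding 12 months. The monthly counts below are invented.
def trailing_12_month_totals(monthly_deaths: list[int]) -> list[int]:
    """Rolling 12-month sums, one per month once a full year of data exists."""
    return [sum(monthly_deaths[i - 12:i]) for i in range(12, len(monthly_deaths) + 1)]

monthly = [4_200] * 12 + [4_500] * 6 + [5_000] * 6   # hypothetical monthly counts
print(trailing_12_month_totals(monthly)[::6])         # totals drift up as recent months rise
```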

David M. Cutler and Edward L. Glaeser offer a primer on how we got here in their article in the Fall 2021 issue of the Journal of Economic Perspectives: “When Innovation Goes Wrong: Technological Regress and the Opioid Epidemic.” (Full disclosure: I’ve been Managing Editor of this journal since the first issue in 1987. On the other side, all JEP articles have been freely accessible online for a decade now, so any personal benefit I receive from encouraging people to read them is highly indirect.)

Here’s the evolution of the problem in one graph. The blue line at the top is drug overdoses from all causes since 2000. The red dashed line shows overdoses just from opioids: the red line tracks the blue line, showing that the problem is fundamentally about opioids. The yellow dashed line shows overdoses from prescription opioids, and you can see that for about a decade after 2000, this was the main problem. Around 2010, when efforts were made to crack down on overprescribing prescription opioids, overdoses from heroin take off. Not long after that, overdoses from synthetic opioids like fentanyl and tramadol take off, and these have been the main source of opioid overdoses in the last few years.

Cutler and Glaeser tell the story this way:

The opioid epidemic began with the availability of OxyContin in 1996. OxyContin was portrayed as a revolutionary wonder drug: because the painkiller was released only slowly into the body, relief would last longer and the potential for addiction would decline. From 1996 to 2011, legal opioid shipments rose six-fold. But the hoped-for benefits proved a mirage. Pain came back sooner and stronger than expected. Tolerance built up, which led to more and higher doses. Opioid use led to opioid abuse, and some took to crushing the pills and ingesting the medication all at once. A significant black market for opioids was born. Fifteen years after the opioid era began, restrictions on their use began to bind. From 2011 on, opioid prescriptions fell by one-third. Unfortunately, addiction is easier to start than stop. With reduced access to legal opioids, people turned to illegal ones, first heroin and then fentanyl, which has played a dominant role in the recent spike in opioid deaths.

How did OxyContin get such a foothold? There's plenty of blame to pass around. First, the government regulators who approved the drug deserve a slice of blame. The theory behind OxyContin was that slow release would require less medication, and thus pose less harm. But as Cutler and Glaeser point out: "At the time of FDA approval and even after, no clinical trials backed up this theory." Instead, the FDA relied on evidence that hospital inpatients didn't tend to become addicted, without asking if the same would apply to outpatients. Cutler and Glaeser note:

The FDA generally requires at least two long-term studies of safety and efficacy in a particular condition before drug approval, but for OxyContin, the primary trial for approval was a two-week trial in patients with osteoarthritis. Even with this limited evidence, the FDA approved OxyContin “for the management of moderate to severe pain where use of an opioid analgesic is appropriate for more than a few days”—with no reference to any particular condition and no limit to short-term use. … Two examiners involved in OxyContin’s approval by the Food and Drug Administration went on to work for Purdue. When the FDA convened an advisory group in 2002 to examine the harms from OxyContin, eight of the ten experts had ties to pharmaceutical firms.

I'd also say that some of the doctors who overprescribed these medications deserve their share of the blame. There's lots of evidence of how a big marketing effort by Purdue encouraged doctors to prescribe OxyContin, but at the end of the day, it's the doctors who actually did the prescribing, and some of them went far overboard. Cutler and Glaeser cite evidence that the top 5 percent of drug prescribers accounted for 58 percent of all prescriptions in Kentucky, 36 percent in Massachusetts, and 40 percent in California. The medical profession is well aware that people have been getting addicted to opioids in various forms for centuries, and some greater skepticism was called for.

Roughly 700,000 Americans have died of opioid overdoses since 1999. The isolation and stresses of the pandemic seem to have made the problem worse. It feels to me as if it's become a cliche to refer to opioid overdoses as a "crisis," but it's a crisis that doesn't seem to be receiving a crisis-level response. Cutler and Glaeser go into some detail on demand-side and supply-side determinants of the crisis, but I'll let you go to their article for details. They conclude this way:

Past US public health efforts offer both hope and despair. Nicotine is an extremely addictive substance and yet smoking rates have fallen dramatically over the past five decades, because of both regulation and fear of death. On the other side, the harms of obesity are also well-known and average weights are still increasing. We cannot predict whether opioid addiction will decline like cigarette smoking or persist like obesity.

The medical use of opioids to treat pain will always involve costs and benefits, and the optimal level of opioid prescription is unlikely to be zero. The mistake that doctors and prescribers made in recent decades was to assume overoptimistically that a time release system would render opioids non-addictive. Thousands of years of experience with the fruits of the poppy should have taught that opioids have never been safe and probably never will be.

The larger message of the opioid epidemic is that technological innovation can go badly wrong when consumers, professionals, and regulators underestimate the downsides of new innovations and firms take advantage of this error. Typically, consumers can experiment with a new product and reject the duds, but with addiction, experimentation can have permanent consequences.


Why Has Global Wealth Grown So Quickly?

The amount of wealth in an economy should be related to the amount of income. For example, wealth in real estate will be linked to the income that people have available to pay for housing. Wealth in the form of corporate stock should be linked to the profits of companies. From the 1970s up through the 1990s, for the global economy as a whole, total wealth was a multiple of about 4.2 times GDP. But over the last couple of decades, the ratio of wealth to GDP has been rising, and it is now a multiple of about 6.1 times GDP. The McKinsey Global Institute lays out some facts and offers some possible interpretations in "The rise and rise of the global balance sheet: How productively are we using our wealth?" (November 15, 2021).

The report focuses on ten countries that make up about 60% of world GDP: Australia, Canada, China, France, Germany, Japan, Mexico, Sweden, the United Kingdom, and the United States. For each country, wealth has three main components: "real assets and net worth; financial assets and liabilities held by households, governments, and nonfinancial corporations; and financial assets and liabilities held by financial corporations." Here's the pattern of wealth/GDP over time for those countries:

It’s interesting to note that the United States is not leading the way here in growth of national wealth. In the more detailed discussion, while it’s true that wealth in the form of real estate and corporate stock values has been rising in the US, it’s also true that foreign ownership of US wealth has been rising faster than US ownership of foreign wealth, which has kept the overall US ratio relatively unchanged.

The MGI report describes the overall dynamics this way:

A central finding from this analysis is that, at the level of the global economy, the historical link between the growth of wealth, or net worth, and the value of economic flows such as GDP no longer holds. Economic growth has been sluggish over the past two decades in advanced economies, but net worth, which long tracked GDP growth, has soared in relation to it. This divergence has emerged as asset prices rose sharply—and are now almost 50 percent higher than the long-run average relative to income. The increase was not a result of 21st-century trends such as the increasing digitization of the economy. Rather, in an economy increasingly propelled by intangible assets, a glut of savings has struggled to find investments offering sufficient economic returns and lasting value to investors. These (ex-ante) savings have instead found their way into a traditional asset class, real estate, or into corporate share buybacks, driving up asset prices.
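As a quick consistency check on those figures, the rise in the global wealth-to-GDP multiple from about 4.2 to about 6.1 works out to an increase of roughly 45 percent, which lines up with the report's "almost 50 percent higher than the long-run average" claim. A minimal sketch of the arithmetic, using the approximate multiples cited above:

```python
# Global wealth as a multiple of GDP (approximate figures cited above).
ratio_1970s_1990s = 4.2   # long-run average from the 1970s through the 1990s
ratio_recent = 6.1        # multiple over the last couple of decades

increase = ratio_recent / ratio_1970s_1990s - 1
print(f"Wealth relative to income is about {increase:.0%} higher")  # ~45%
```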

One possible explanation for this growth in wealth would be if these major world economies were going through a major investment boom, in which case the additional wealth might be a natural reflection of much greater productive capacity. But that doesn't seem to be the main story. Instead, most of the gain in wealth, and indeed most of the world's wealth, exists in the form of real estate. The MGI report puts it this way:

Two-thirds of global net worth is stored in real estate and only about 20 percent in other fixed assets, raising questions about whether societies store their wealth productively. The value of residential real estate amounted to almost half of global net worth in 2020, while corporate and government buildings and land accounted for an additional 20 percent. Assets that drive much of economic growth—infrastructure, industrial structures, machinery and equipment, intangibles—as well as inventories and mineral reserves make up the rest. Except in China and Japan, non-real estate assets made up a lower share of total real assets than in 2000. Despite the rise of digitization, intangibles are just 4 percent of net worth: they typically lose value to competition and commoditization, with notable exceptions. Our analysis does not address nonmarket stores of value such as human or natural capital.

What possible scenarios could emerge from this shift in the wealth/GDP ratio? There’s basically a happy interpretation and a not-so-happy one. The MGI report puts it this way:

In the first view, an economic paradigm shift has occurred that makes our societies wealthier than in the past relative to GDP. In this view, several global trends including aging populations, a high propensity to save among those at the upper end of the income spectrum, and the shift to greater investment in intangibles that lose their private value rapidly are potential game changers that affect the savings-investment balance. These together could lead to sustainably lower interest rates and stable expectations for the future, thereby supporting higher valuations than in the past. While there was no clear discernible upward trend of net worth relative to GDP at global level prior to 2000, cross-country variation was always large, suggesting that substantially different levels are possible. High equity valuations, specifically, could be justified by attributing more value to intangible assets, for instance, if corporations can capture the value of their intangibles investments more enduringly than the depreciation rates that economists assume. …

In the opposing view, this long period of divergence might be ending, and high asset prices could eventually revert to their long-term relationship relative to GDP, as they have in the past. Increased investment in the postpandemic recovery, in the digital economy, or in sustainability might alter the savings-investment dynamic and put pressure on the unusually low interest rates currently in place around the world, for example. This would lead to a material decline in real estate values that have underpinned the growth in global net worth for the past two decades. At current loan-to-value ratios, lower asset values would mean that a high share of household and corporate debt will exceed the value of underlying assets, threatening the repayment capacity of borrowers and straining financial systems. We estimate that net worth relative to GDP could decline by as much as one-third if the relationship between wealth and income returned to its average during the three decades prior to 2000. … Not only is the sustainability of the expanded balance sheet in question; so too is its desirability, given some of the drivers and potential consequences of the expansion. For example, is it healthy for the economy that high house prices rather than investment in productive assets are the engine of growth, and that wealth is mostly built from price increases on existing wealth?

I have no clear idea what the probability is of the negative scenario, that is, a substantial collapse of wealth holdings around the world. The figure above suggests that the effect might be a little more moderate for the US economy than for some others. But a global wealth collapse would be rugged news for the financial sector, as well as for the future financial plans of people and companies. The MGI report suggests that there may be a way to thread the needle here. If one is concerned about the possibility of a wealth collapse, one way to cushion the blow would be to focus on redirecting wealth and capital away from real estate and toward investment-type options that will tend to increase future productivity. The report notes: "[R]edirecting capital to more productive and sustainable uses seems to be the economic imperative of our time, not only to support growth and the environment but also to protect our wealth and financial systems."
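The report's "decline by as much as one-third" estimate also follows from the same multiples: reverting from about 6.1 times GDP back to the pre-2000 average of about 4.2 implies a drop of roughly 31 percent in net worth relative to GDP. A minimal sketch, using the same approximate figures as above:

```python
# Implied decline if the wealth-to-GDP multiple reverts to its pre-2000 average.
ratio_recent = 6.1     # approximate current multiple of GDP
ratio_pre_2000 = 4.2   # approximate pre-2000 average multiple of GDP

decline = 1 - ratio_pre_2000 / ratio_recent
print(f"Implied decline in net worth relative to GDP: {decline:.0%}")  # ~31%
```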