Renewable Energy and Its Need for Minerals

Solar panels and wind turbines are physical objects, and they need physical inputs. In particular, a dramatic expansion of solar and wind power will require a dramatic expansion of the production of a range of key minerals. The IMF includes a short “Special Feature” in its most recent World Economic Outlook (October 2021) on this subject: “Clean Energy Transition and Metals: Blessing or Bottleneck?” The IMF writes:

To limit global temperature increases from climate change to 1.5 degrees Celsius, countries and firms increasingly pledge to reduce carbon dioxide emissions to net zero by 2050. Reaching this goal requires a transformation of the energy system that could substantially raise the demand for metals. Low-greenhouse-gas technologies—including
renewable energy, electric vehicles, hydrogen, and carbon capture—require more metals than their fossil-fuel-based counterparts. …

Here’s a table showing the main “transition metals” – that is, metals likely to be important in the energy transition.

Here’s a figure showing how the quantity of these metals needed is likely to rise by the 2030s. Notice that the left-hand axis measures consumption in the 2030s as a multiple of consumption in the 2010s. Notice also that the first listed element, lithium, is measured on the much higher right-hand axis.

Finally, here’s a figure showing how the production and reserves of four of these key transition metals are currently concentrated in a few countries – and the US does not appear as a major producer of any of them. Thus, one implication of a transition to cleaner energy with current technologies is US dependence on the countries shown here for key inputs, and not all of these countries are both friendly and stable. The IMF report focuses on a subset of these metals: “The four representative metals chosen for in-depth analysis are copper, nickel, cobalt, and lithium. Copper and nickel are well-established metals. Cobalt and lithium are probably the most promising rising metals. In the IEA’s Net Zero by 2050 emissions scenario, total consumption of lithium and cobalt rises by a factor of more than six, driven by clean energy demand, while copper shows a twofold and nickel a fourfold increase in total consumption …”

Many of those who are most strongly in favor of a swift move to cleaner energy also have severe qualms about an increase in mining. Political conflicts thus arise. In northern Nevada, a company called Lithium Americas believes it has discovered, at a location called Thacker Pass, one of the world’s largest deposits of lithium. However, local protestors are pushing back hard against mining this lithium. The protesters clearly do not believe that the environmental concerns can be overcome. Indeed, one story notes: “At the Thacker Pass camp, activists who call themselves ‘radical environmentalists’ hope that addressing these challenges will press nations to choose to drastically reduce car and electricity use to meet their climate goals rather than develop mineral reserves to sustain lifestyles that require more energy.”
 

I should perhaps emphasize that these kinds of extrapolations about long-run demand need to be treated with care. Such predictions are premised on current technology. If demand for these minerals spiked, and their prices spiked as well, it would presumably unleash a set of incentives to conserve on their use, to find cheaper alternatives, to recycle from previous uses, and so on. But one can at least say that given current technology, green energy advocates face a dilemma here: to support a rapid expansion of clean-energy technologies, you need to also support substantial increases in mining operations. And when it comes to the environmental damage from such mining for transition metals, it’s worth remembering that the damage is likely to be considerably less if such operations are carried out in the United States than if the expanded mining is done in some of the other countries that are the main potential sources for such metals.

Opioid Overdoses: Worse Again

Deaths from overdoses, especially opioids, are getting worse. Here’s a graph from the Centers for Disease Control. Each point plots the number of deaths from drug overdoses in the previous 12 months. Thus, in January 2015, on the left-hand side of the figure, there had been about 50,000 drug overdose deaths in the previous 12 months. By April 2021, on the right-hand side of the figure, there had been about 100,000 drug overdose deaths in the previous 12 months. The figure also shows that the problem seemed to have leveled off for a while in 2018 and 2019, but with the pandemic in 2020 it started getting worse again.
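To be clear about what each point on the CDC graph represents, here is a minimal sketch of a trailing 12-month total. The monthly figures are invented for illustration; they are not CDC data.

```python
# Illustrative only: the CDC series plots, for each month, the sum of
# overdose deaths over the preceding 12 months. The monthly numbers
# below are made up for demonstration.

def trailing_12_month_totals(monthly_deaths):
    """Return the rolling 12-month sum for each month from month 12 onward."""
    totals = []
    for i in range(11, len(monthly_deaths)):
        totals.append(sum(monthly_deaths[i - 11:i + 1]))
    return totals

# 24 hypothetical months of overdose deaths, trending upward:
monthly = [4000 + 100 * m for m in range(24)]
rolling = trailing_12_month_totals(monthly)
print(rolling[0], rolling[-1])  # first and last trailing 12-month totals
```

Each point in the rolling series overlaps its neighbors in 11 of 12 months, which is why the CDC curve looks so smooth even when monthly counts are noisy.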

David M. Cutler and Edward L. Glaeser offer a primer on how we got here in their article in the Fall 2021 issue of the Journal of Economic Perspectives: “When Innovation Goes Wrong: Technological Regress and the Opioid Epidemic.” (Full disclosure: I’ve been Managing Editor of this journal since the first issue in 1987. On the other side, all JEP articles have been freely accessible online for a decade now, so any personal benefit I receive from encouraging people to read them is highly indirect.)

Here’s an evolution of the problem in one graph. The blue line at the top is drug overdoses from all causes since 2000. The red dashed line shows overdoses just from opioids: the red line tracks the blue line, showing that the problem is fundamentally about opioids. The yellow dashed line shows overdoses from prescription opioids, and you can see that for about a decade after 2000, this was the main problem. Around 2010, when efforts were made to crack down on overprescribing prescription opioids, overdoses from heroin took off. Not long after that, overdoses from synthetic opioids like fentanyl and tramadol took off, and these have been the main source of opioid overdoses in the last few years.

Cutler and Glaeser tell the story this way:

The opioid epidemic began with the availability of OxyContin in 1996. OxyContin was portrayed as a revolutionary wonder drug: because the painkiller was released only slowly into the body, relief would last longer and the potential for addiction would decline. From
1996 to 2011, legal opioid shipments rose six-fold. But the hoped-for benefits proved a mirage. Pain came back sooner and stronger than expected. Tolerance built up, which led to more and higher doses. Opioid use led to opioid abuse, and some took to crushing the pills and ingesting the medication all at once. A significant black market for opioids was born. Fifteen years after the opioid era began, restrictions on their use began to bind. From 2011 on, opioid prescriptions fell by one-third. Unfortunately, addiction is easier to start than stop. With reduced access to legal opioids, people turned to illegal ones, first heroin and then fentanyl, which has played a dominant role in the recent spike in opioid deaths.

How did OxyContin get such a foothold? There’s plenty of blame to pass around. First, the government regulators who approved the drug deserve a slice of blame. The theory behind OxyContin was that slow release would require less medication, and thus pose less harm. But as Cutler and Glaeser point out: “At the time of FDA approval and even after, no clinical trials backed up this theory.” Instead, the FDA relied on evidence that hospital inpatients didn’t tend to become addicted, without asking whether the same would apply to outpatients. Cutler and Glaeser note:

The FDA generally requires at least two long-term studies of safety and efficacy in a particular condition before drug approval, but for OxyContin, the primary trial for approval was a two-week trial in patients with osteoarthritis. Even with this limited evidence, the FDA approved OxyContin “for the management of moderate to severe pain where use of an opioid analgesic is appropriate for more than a few days”—with no reference to any particular condition and no limit to short-term use. … Two examiners involved in OxyContin’s approval by the Food and Drug Administration went on to work for Purdue. When the FDA convened an advisory group in 2002 to examine the harms from OxyContin, eight of the ten experts had ties to pharmaceutical firms.

I’d also say that some of the doctors who overprescribed these medications deserve their share of the blame. There’s lots of evidence of how a big marketing effort by Purdue encouraged doctors to prescribe OxyContin, but at the end of the day, it’s the doctors who actually did the prescribing, and some of them went far overboard. Cutler and Glaeser cite evidence that the top 5% of drug prescribers accounted for 58% of all prescriptions in Kentucky, 36% in Massachusetts, and 40% in California. The medical profession is well aware that people have been getting addicted to opioids in various forms for centuries, and some greater skepticism was called for.

Roughly 700,000 Americans have died of opioid overdoses since 1999. The isolation and stresses of the pandemic seem to have made the problem worse. It feels to me as if it’s become a cliché to refer to opioid overdoses as a “crisis,” but it’s a crisis that doesn’t seem to be receiving a crisis-level response. Cutler and Glaeser go into some detail on demand-side and supply-side determinants of the crisis, but I’ll let you go to their article for details. They conclude this way:

Past US public health efforts offer both hope and despair. Nicotine is an extremely addictive substance and yet smoking rates have fallen dramatically over the past five decades, because of both regulation and fear of death. On the other side, the harms of obesity are also well-known and average weights are still increasing. We cannot predict whether opioid addiction will decline like cigarette smoking or persist like obesity.

The medical use of opioids to treat pain will always involve costs and benefits, and the optimal level of opioid prescription is unlikely to be zero. The mistake that doctors and prescribers made in recent decades was to assume overoptimistically that a time release system would render opioids non-addictive. Thousands of years of experience with the fruits of the poppy should have taught that opioids have never been safe and probably never will be.

The larger message of the opioid epidemic is that technological innovation can go badly wrong when consumers, professionals, and regulators underestimate the downsides of new innovations and firms take advantage of this error. Typically, consumers can experiment with a new product and reject the duds, but with addiction, experimentation can have permanent consequences.

Here are some of my previous posts on what I will keep calling the opioid “crisis:”

Why Has Global Wealth Grown So Quickly?

The amount of wealth in an economy should be related to the amount of income. For example, wealth in real estate will be linked to the income that people have available to pay for housing. Wealth in the form of corporate stock should be linked to the profits of companies. From the 1970s up through the 1990s, for the global economy as a whole, total wealth was a multiple of about 4.2 times GDP. But for the last couple of decades, the ratio of wealth/GDP has been rising, and is now a multiple of about 6.1 times GDP. The McKinsey Global Institute lays out some facts and offers some possible interpretations in “The rise and rise of the global balance sheet: How productively are we using our wealth?” (November 15, 2021).

The report focuses on ten countries that make up about 60% of world GDP: Australia, Canada, China, France, Germany, Japan, Mexico, Sweden, the United Kingdom, and the United States. For each country, the wealth has three main components: “real assets and net worth; financial assets and liabilities held by households, governments, and nonfinancial corporations; and financial assets and liabilities held by financial corporations.” Here’s the pattern of wealth/GDP over time for those countries:

It’s interesting to note that the United States is not leading the way here in growth of national wealth. In the more detailed discussion, while it’s true that wealth in the form of real estate and corporate stock values has been rising in the US, it’s also true that foreign ownership of US wealth has been rising faster than US ownership of foreign wealth, which has kept the overall US ratio relatively unchanged.

The MGI report describes the overall dynamics this way:

A central finding from this analysis is that, at the level of the global economy, the historical link between the growth of wealth, or net worth, and the value of economic flows such as GDP no longer holds. Economic growth has been sluggish over the past two decades in advanced economies, but net worth, which long tracked GDP growth, has soared in relation to it. This divergence has emerged as asset prices rose sharply—and are now almost 50 percent higher than the long-run average relative to income. The increase was not a result of 21st-century trends such as the increasing digitization of the economy. Rather, in an economy increasingly propelled by intangible assets, a glut of savings has struggled to find investments offering sufficient economic returns and lasting value to investors. These (ex-ante) savings have instead found their way into a traditional asset class, real estate, or into corporate share buybacks, driving up asset prices.

One possible explanation for this growth in wealth would be if these major world economies were going through a major investment boom, in which case the additional wealth might be a natural reflection of much greater productive capacity. But that doesn’t seem to be the main story. Instead, most of the gain in wealth, and most of the world’s wealth, exists in the form of real estate. The MGI report puts it this way:

Two-thirds of global net worth is stored in real estate and only about 20 percent in other fixed assets, raising questions about whether societies store their wealth productively. The value of residential real estate amounted to almost half of global net worth in 2020, while corporate and government buildings and land accounted for an additional 20 percent. Assets that drive much of economic growth—infrastructure, industrial structures, machinery and equipment, intangibles—as well as inventories and mineral reserves make up the rest. Except in China and Japan, non-real estate assets made up a lower share of total real assets than in 2000. Despite the rise of digitization, intangibles are just 4 percent of net worth: they typically lose value to competition and commoditization, with notable exceptions. Our analysis does not address nonmarket stores of value
such as human or natural capital.

What possible scenarios could emerge from this shift in the wealth/GDP ratio? There’s basically a happy interpretation and a not-so-happy one. The MGI report puts it this way:

In the first view, an economic paradigm shift has occurred that makes our societies wealthier than in the past relative to GDP. In this view, several global trends including aging populations, a high propensity to save among those at the upper end of the income spectrum, and the shift to greater investment in intangibles that lose their private value rapidly are potential game changers that affect the savings-investment balance. These together could lead to sustainably lower interest rates and stable expectations for the future, thereby supporting higher valuations than in the past. While there was no clear discernible upward trend of net worth relative to GDP at global level prior to 2000, cross-country variation was always large, suggesting that substantially different levels are possible. High equity valuations, specifically, could be justified by attributing more value to intangible assets, for instance, if corporations can capture the value of their intangibles investments more enduringly than the depreciation rates that economists assume. …

In the opposing view, this long period of divergence might be ending, and high asset prices could eventually revert to their long-term relationship relative to GDP, as they have in the past. Increased investment in the postpandemic recovery, in the digital economy, or in sustainability might alter the savings-investment dynamic and put pressure on the unusually low interest rates currently in place around the world, for example. This would lead to a material decline in real estate values that have underpinned the growth in global net worth for the past two decades. At current loan-to-value ratios, lower asset values would mean that a high share of household and corporate debt will exceed the value of underlying assets, threatening the repayment capacity of borrowers and straining financial systems. We estimate that net worth relative to GDP could decline by as much as one-third if the relationship between wealth and income returned to its average during the three decades prior to 2000. … Not only is the sustainability of the expanded balance sheet in question; so too is its desirability, given some of the drivers and potential consequences of the expansion. For example, is it healthy for the economy that high house prices rather than investment in productive assets are the engine of growth, and that wealth is mostly built from price increases on existing wealth?

I have no clear idea what the probability is of the negative scenario – that is, a substantial collapse of wealth holdings around the world. The figure above suggests that the effect might be a little more moderate for the US economy than for some others. But a global wealth collapse would be rough news for the financial sector, as well as for the future financial plans of people and companies. The MGI report suggests that there may be a way to thread the needle here. If one is concerned about the possibility of a wealth collapse, one way to cushion the blow would be to focus on redirecting wealth and capital away from real estate and toward investment-type options that will tend to increase future productivity. The report notes: “[R]edirecting capital to more productive and sustainable uses seems to be the economic imperative of our time, not only to support growth and the environment but also to protect our wealth and financial systems.”
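The MGI estimate that net worth relative to GDP could decline “by as much as one-third” can be sanity-checked against the two ratios quoted earlier in this post. This is back-of-envelope arithmetic only, not MGI’s actual calculation:

```python
# Back-of-envelope check using the ratios quoted in the post:
# wealth/GDP averaged about 4.2 before 2000 and is about 6.1 now.
pre_2000_ratio = 4.2
current_ratio = 6.1

# If the ratio reverted to its pre-2000 average, the proportional decline
# in net worth relative to GDP would be:
decline = (current_ratio - pre_2000_ratio) / current_ratio
print(f"Implied decline: {decline:.0%}")  # roughly one-third
```

A decline of about 31 percent is indeed “as much as one-third,” so the headline claim is consistent with the ratios the report itself cites.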

The Bottleneck Story

Economies around the world are afflicted by supply chain bottlenecks. Daniel Rees and Phurichai Rungcharoenkitkul dig into the topic in “Bottlenecks: causes and macroeconomic implications” (Bank for International Settlements, BIS Bulletin #48, November 11, 2021).

The authors point out several factors behind the bottlenecks. One, of course, is disruptions to work and schedules caused by the COVID pandemic.

Pandemic-induced supply disruptions have clearly been a major cause of bottlenecks, especially in the early stages of the global recovery. Producers who had severed relationships with suppliers early in the pandemic found it hard to re-establish them when demand picked up. Asynchronous lockdowns disrupted shipping, while sporadic virus outbreaks led to further dislocations. But there are also other causes. Unexpected natural events have intensified supply pressures. A lack of investment in the years leading up to the pandemic left some industries with little spare capacity. The investment shortfall was particularly severe for oil and resource commodities, due in part to the transition away from fossil fuel energy.

But the pandemic has now been with us since February 2020, more than 18 months ago. Why are the bottlenecks becoming so salient now? The authors write (footnotes and references to graphs omitted):

Several factors have amplified the economic severity of bottlenecks. One is the shift in the composition of demand towards manufactured goods during the Covid recession and recovery. These goods are heavily reliant on inputs from other industries, leading to larger demand spillovers than from a services-led recovery. Manufactured goods (and their inputs) also tend to be relatively capital-intensive, making their short-run supply elasticity low as it takes time to expand productive capacity. As a result, sudden increases in manufactured goods demand can translate quickly into bottlenecks, leading to higher inflation.

As I have noted in an earlier post, the patterns of consumption of goods and services have been unusual in the pandemic recession. Usually in a recession, consumption of goods drops (as people put off purchasing new items) but consumption of services remains about the same. In the pandemic recession, consumption of many services was sharply constrained, by a combination of public lockdowns and many people stepping back from public spaces on their own. However, the federal government at the same time was paying out substantial support packages: checks to families, higher unemployment payments, payroll protection to small businesses, and so on. Overall, demand in the economy remained reasonably high (the actual recession was only two months long), but with many services shut down or unattractive, the money went into buying goods instead. The BIS report continues:

A second factor is behavioural change on the part of supply chain participants. Anticipation of product shortages and precautionary hoarding at different stages of supply chain have aggravated initial shortages (the “bullwhip effect”), leading to further incentives to build buffers. These behavioural changes have the potential to lead to feedback effects that exacerbate bottlenecks.
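The “bullwhip effect” in the quoted passage can be illustrated with a toy amplification rule: each stage, seeing its incoming orders rise, orders a bit extra to rebuild its own buffer, so a modest demand increase grows as it travels upstream. The numbers and the 20% buffer factor below are invented for the sketch, not taken from the BIS analysis.

```python
# Toy illustration of the "bullwhip effect": each supply chain stage
# passes on the demand change it sees, plus a precautionary buffer,
# amplifying the change as it moves upstream. All numbers are invented.

def upstream_orders(consumer_demand_change, n_stages, buffer_factor=0.2):
    """Track how a change in consumer demand grows stage by stage upstream."""
    changes = [consumer_demand_change]
    for _ in range(n_stages):
        changes.append(changes[-1] * (1 + buffer_factor))
    return changes

# A 10% rise in consumer demand, passing through four upstream stages:
changes = upstream_orders(10.0, 4)
print([round(c, 1) for c in changes])  # [10.0, 12.0, 14.4, 17.3, 20.7]
```

Even in this crude sketch, a 10% retail-level change becomes a roughly 21% change four stages upstream, which is the feedback dynamic the BIS authors describe.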

In short, shortages lead to hoarding, and hoarding makes the shortages worse. It’s like the Great Toilet Paper Shortage of 2020 all over again, but this time at a more widespread and macroeconomic level. More from BIS:

A third important background element is the lean structure of supply chains, which have prioritised efficiency over resilience in recent decades. These intricate networks of production and logistics were a virtue in normal times, but have become a shock propagator during the pandemic. Once dislocations emerged, the complexity of supply chains made them hard to repair, leading to persistent mismatches between demand and supply.

Here are a few figures to illustrate the patterns. The first set of figures shows rising shipping costs, rising delivery times, and drops in inventories of goods.

But the BIS also points out that while these bottlenecks are very real, the amount of goods being actually moved has been rising and is at high levels. Thus, the bottlenecks have arisen because demand is trying to pull so many goods and services through the pre-existing and disrupted supply chains. The figures show levels of shipping of some key raw materials, semiconductors, and overall container volumes.

In the short run, a combination of high demand and bottlenecks will push up prices and cause inflation. The BIS economists estimate that if you take out price increases for bottleneck-affected items like energy and motor vehicles, the US inflation rate would have been 2.8 percentage points lower. The question of whether this inflation will last remains unsettled. On one side, if the inflationary pressures become incorporated into an expectation by firms that they need to keep raising prices substantially, and an expectation by workers that they need to see wages keep rising substantially, then inflation can become self-sustaining. In the decades since World War II, this kind of inflation has been ended by the Federal Reserve raising interest rates and causing a recession.

On the other side, there is some possibility for the bottlenecks to resolve themselves. The BIS report notes:

Nevertheless, persistent bottlenecks could also prompt corrective behavioural changes over time, eg by providing incentives for investment to expand capacity. Once bottlenecks begin to ease, the feedback loops could operate as a virtuous circle to mitigate the bullwhip effects. In this way, just as bottlenecks have persisted longer than initially expected, their resolution could also follow more swiftly than currently feared.

The Shifting Center of US Population: 1790-2020

After each decennial Census, the US Census Bureau calculates the “center of population” for the United States – that is, if we averaged the geographic location of all US residents, what would the average location be? As you would expect, back in 1790, when the country had only 13 states along the eastern edge of North America, the population center was on the east coast. But as more states joined the country and people moved to the west and south, the population center has shifted.
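The “center of population” is, at bottom, a population-weighted average of locations. Here is a minimal flat-plane sketch; the Census Bureau’s actual method accounts for the curvature of the Earth, and the clusters below are invented for illustration.

```python
# A minimal sketch of a population-weighted centroid. The Census Bureau's
# real computation adjusts for the Earth's curvature; this flat-plane
# version, with invented clusters, just shows the averaging idea.

def population_center(points):
    """points: list of (latitude, longitude, population) tuples."""
    total_pop = sum(p for _, _, p in points)
    lat = sum(la * p for la, _, p in points) / total_pop
    lon = sum(lo * p for _, lo, p in points) / total_pop
    return lat, lon

# Three hypothetical population clusters (lat, lon, millions of people):
clusters = [(40.7, -74.0, 8.0), (41.9, -87.6, 2.7), (34.1, -118.2, 4.0)]
lat, lon = population_center(clusters)
print(round(lat, 2), round(lon, 2))  # an interior point, pulled toward the largest cluster
```

Population growth in the west and south shifts the weights in this average, which is exactly why the computed center has drifted southwest decade by decade.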

Here’s a more detailed map from the Census Bureau, with some relevant events from US history duly noted.

A Primer on NFTs

NFT stands for “non-fungible token.” There is real money involved here: hundreds of millions of dollars of NFTs are bought and sold each week. Back in March, an NFT for a work of art called “Everydays: The First 5000 Days” sold through the Christie’s auction house in New York for $69 million. Notice that the buyer did not receive a canvas-and-paint piece of art. Nor did the buyer get exclusive ownership over the digital image: you can go to the image at the link above, copy it, and put it on your own computer. So what is this “token” that was purchased, and what does the “non-fungible” part of NFT mean? Steve Kaczynski and Scott Duke Kominers offer a primer in “How NFTs Create Value” (Harvard Business Review, November 10, 2021). They write:

As the name “non-fungible token” suggests, each NFT is a unique, one-of-a-kind digital item. They’re stored on public-facing digital ledgers called blockchains, which means it’s possible to prove who owns a given NFT at any moment in time and trace the history of prior ownership. Moreover, it’s easy to transfer NFTs from one person to another — just as a bank might move money across accounts — and it’s very hard to counterfeit them. Because NFT ownership is easy to certify and transfer, we can use them to create markets in a variety of different goods.

But NFTs don’t just provide a kind of digital “deed.” Because blockchains are programmable, it’s possible to endow NFTs with features that enable them to expand their purpose over time, or even to provide direct utility to their holders. In other words, NFTs can do things — or let their owners do things — in both digital spaces and the physical world.

In this sense, NFTs can function like membership cards or tickets, providing access to events, exclusive merchandise, and special discounts — as well as serving as digital keys to online spaces where holders can engage with each other. Moreover, because the blockchain is public, it’s even possible to send additional products directly to anyone who owns a given token. All of this gives NFT holders value over and above simple ownership — and provides creators with a vector to build a highly engaged community around their brands.

It’s not uncommon to see creators organize in-person meetups for their NFT holders, as many did at the recent NFT NYC conference. In other cases, having a specific NFT in your online wallet might be necessary in order to gain access to an online game, chat room, or merchandise store. … Thus owning an NFT effectively makes you an investor, a member of a club, a brand shareholder, and a participant in a loyalty program all at once. At the same time, NFTs’ programmability supports new business and profit models — for example, NFTs have enabled a new type of royalty contract, whereby each time a work is resold, a share of the transaction goes back to the original creator. …

NFTs enable new markets by allowing people to create and build upon new forms of ownership. These projects succeed by leveraging a core dynamic of crypto: A token’s worth comes from users’ shared agreement — and this means that the community one builds around NFTs quite literally creates those NFTs’ underlying value. And the more these communities increase engagement and become part of people’s personal identities, the more that value is reinforced. Newer applications will take greater advantage of online-offline connections, and introduce increasingly complex token designs. 
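The ownership-and-provenance idea in the quoted passage can be sketched as a toy append-only ledger: from the record of transfers alone, you can read off both the current owner and the full history of prior owners. A real blockchain adds cryptographic signatures and decentralized consensus, none of which is modeled here, and all names below are invented.

```python
# Toy append-only ledger illustrating how NFT ownership can be certified
# and traced. No cryptography or consensus is modeled; names are invented.

transfers = []  # each entry: (token_id, from_owner, to_owner)

def mint(token_id, creator):
    """Record the creation of a token, owned by its creator."""
    transfers.append((token_id, None, creator))

def transfer(token_id, new_owner):
    """Record a change of ownership for an existing token."""
    transfers.append((token_id, current_owner(token_id), new_owner))

def current_owner(token_id):
    """The most recent recipient in the token's transfer history."""
    owners = [to for tid, _, to in transfers if tid == token_id]
    return owners[-1] if owners else None

def provenance(token_id):
    """The chain of owners, in order, from creator to current holder."""
    return [to for tid, _, to in transfers if tid == token_id]

mint("everydays-5000", "artist")
transfer("everydays-5000", "collector_a")
transfer("everydays-5000", "collector_b")
print(current_owner("everydays-5000"))  # collector_b
print(provenance("everydays-5000"))     # ['artist', 'collector_a', 'collector_b']
```

Because the ledger is append-only and public, anyone can verify ownership without trusting the current holder, which is the property that makes the “royalty on resale” contracts mentioned above enforceable.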

I confess that I’m not at all sure how membership in an online community made it worthwhile for someone to spend $69 million for a nonfungible token on “Everydays: The First 5000 Days.” But a lot of money has been made by getting people to join up with certain online communities, through social media, gaming, and direct purchases. NFTs seem to offer new possibilities for organizing such groups.

Women in Economics: Another Part of the Story

Even as other academic fields have made considerable progress toward equality in the numbers of female and male professors, economics has lagged behind. For an overview, a useful starting point is the three-paper symposium in the Winter 2019 issue of the Journal of Economic Perspectives:

A common approach in this literature is to think about the “pipeline”–that is, what share of economics majors are women, what share of PhD students in economics are women, and then what share of assistant, associate, and full professors of economics are women. Donna Ginther has written “Gender, Race, and Academic Career Outcomes — Does Economics Mirror Other Disciplines?” which provides an overview of research in the area in the October 2021 NBER Reporter. Here’s a figure that focuses on one stage of the pipeline: promotion from assistant to associate professor.


Ginther writes (footnotes omitted):

With my long-time collaborator Shulamit Kahn, who has played a key role in this work, I have examined gender differences in career outcomes for economists and for other academic fields. We found that after controlling for research publications, women were significantly less likely to be promoted to tenure in economics. Our most recent study used Academic Analytics data to update the analysis of the economics profession compared with other science and social science fields. Figure 1 shows survival curves by gender and compares economics to the fields of mathematics and statistics, political science, biomedical science, physical science, and engineering. The only significant gender difference in promotion to associate professor is in economics, where women were 15 percent less likely to be promoted after controlling for publications, citations, and research grants.

The figure draws on research published earlier by Donna Ginther and Shulamit Kahn, “Women in Academic Economics: Have We Made Progress?” (AEA Papers and Proceedings, May 2021, pp. 138-142; also available as NBER Working Paper 28743, April 2021).

Some of the previous studies were limited to the best research universities. We therefore separately estimated the hazard analysis for two samples: those who entered academia into very high research activity institutions and those who did not. … The majority of the observations were in the very high research activity universities (which is primarily informative about the clients interested in Academic Analytics services). We were frankly stunned by the results. The gender tenure gap was small and insignificant in very high research activity institutions. However, in less research-intensive universities, it was huge, with women’s rate of receiving tenure (with all controls) 46 percent lower than men’s (p = 0.055). … [T]he huge point estimate of the tenure penalty at these less research-based universities and colleges is remarkable.

Ginther and Kahn frankly admit that they do not have an obvious explanation for why the gender tenure gap in economics is so much larger at less research-intensive universities. But it seems a topic worth exploring further.

I’ll just add that my own sense is that the issues with attracting women to careers in economics may start early. Since about 2005, women have been about 30-35% of the economics majors, the economics PhD students, and the new assistant professors. While I’m sure that some useful steps might be taken to bolster the pipeline to becoming a tenured professor at these stages, it will be challenging to get to gender equity in the tenured professoriate if only one-third of undergraduate economics majors are women. A few years ago, when I looked at who takes the AP microeconomics and macroeconomics exams, I found that the number of males who get a 4 or 5 on these exams was much higher than the number of females. Thus, it seems plausible that even before college students reach campus, there are many more men than women considering a college major in economics.

State-Level Health and Income: A Puzzle

If you look at the US states by income level and mortality rates, you find a reasonably strong negative correlation: that is, states with higher income tend to have lower mortality. But here’s the puzzle. If you go back a few decades to 1980 or 1968 and look at the same pattern, you find only a much weaker correlation: that is, much less tendency for states with higher income to have lower mortality. Why?

Benjamin K. Couillard, Christopher L. Foote, Kavish Gandhi, Ellen Meara, and Jonathan Skinner tackle this question among others in “Rising Geographic Disparities in US Mortality” (Journal of Economic Perspectives, Fall 2021, 35:4, pp. 123-46). (Full disclosure: I’m the Managing Editor of this journal, and have been since 1986, so I am perhaps predisposed to find the articles of interest.) Here’s a figure from their paper. The points represent US states. As you can see, if one looks at the data for 1968 or 1980 (the top two panels), the negative correlation between income and mortality rates is relatively weak. However, if one looks at the same correlation for 2019 (bottom left panel), the negative correlation is much stronger.

The bottom-right panel is usefully thought-provoking. It shows the correlation between state-level income in 1968 and state-level mortality in 2019–which looks a lot like the figure using only 2019 data. To put this another way, state-level income data from 1968 has less correlation with mortality rates in 1968 than it does with mortality rates in 2019. This seems peculiar.

What seems to be happening inside this data is that the ranking of states by per-capita income hasn’t changed all that much since 1968 or 1980: states that tended to be better-off or worse-off then are still better-off or worse-off now. However, mortality rates across states have changed substantially in the last few decades, and in a way that causes the mortality rates to line up more with state-level measures of income. As the authors write: “Instead, mortality changes have been most favorable in those states that have tended to have high relative levels of income over the past three decades.”
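The mechanism described above can be sketched with a small simulation. This is purely illustrative, not the authors’ analysis: all the numbers below are made up, chosen only so that income ranks are highly persistent across decades while mortality’s link to income strengthens over time.

```python
import random

random.seed(0)

def corr(xs, ys):
    # Pearson correlation, computed by hand to stay dependency-free
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 50 hypothetical "states": income ranks are highly persistent...
income_1968 = [random.gauss(0, 1) for _ in range(50)]
income_2019 = [x + random.gauss(0, 0.3) for x in income_1968]  # ranks barely move

# ...but mortality only recently "lined up" with income:
mortality_1968 = [-0.2 * x + random.gauss(0, 1.0) for x in income_1968]   # weak link
mortality_2019 = [-0.8 * x + random.gauss(0, 0.4) for x in income_2019]   # strong link

print(corr(income_1968, mortality_1968))  # weak negative correlation
print(corr(income_2019, mortality_2019))  # strong negative correlation
print(corr(income_1968, mortality_2019))  # nearly as strong: 1968 income "predicts" 2019 mortality
```

Because income ranks barely change, 1968 income is almost as good a predictor of 2019 mortality as 2019 income is, which reproduces the puzzle in the bottom-right panel.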

Why might health have improved more in the higher-income states? One possible explanation is related to what are sometimes called “deaths of despair,” which are deaths due to drug overdoses, alcohol poisoning, and suicide. The difficulty with this explanation is that there aren’t enough of these deaths to drive the changes noted above. When one looks at major causes of death, it turns out that deaths from malignant neoplasms, diseases of the heart, cerebrovascular diseases, and lower respiratory diseases all have a greater (negative) correlation with income at the state level than they did a few decades ago.

Couillard, Foote, Gandhi, Meara, and Skinner suggest an alternative explanation rooted in what they call a “portmanteau of state-level factors.” This has two parts. One part is that if one goes back a half-century and more, one can make a case that living in lower-income states had some health benefits. Studies from the 1930s suggest that when people migrated from lower-income rural states to higher-income urban areas, their health often got worse as they were exposed to the big-city evils of alcohol, tobacco, and pollution. One quirky fact comes from the state of “Kansas, which imposed prohibition in 1880, not ending it until 1948. Perhaps not coincidentally, in 1959, Kansas was tied in first place for the state with the highest life expectancy.”

The other part of the explanation is that in the late 1980s and early 1990s, high-income states were much more likely to enact a portfolio of health-related policies including higher taxes on alcohol and tobacco and expansions of Medicaid coverage focused on pregnant women. These specific policies were part of a broader emphasis on public health policies in the higher-income states that manifested itself in different ways in different places. A few decades later, the states that enacted such policies are seeing the payoff in improved mortality rates.

States are sometimes called the “laboratories of democracy.” One reason for having the federal government set some minimum standards, and then letting states experiment, is to get some evidence on what works and what doesn’t. In that spirit, the authors write: “Although states with high income have shown the way, states with lower income capacity are not inexorably constrained to rates of midlife mortality that rank among the worst in developed countries.”

When Residential Real Estate Turned Commercial: Working from Home

Everyone knows that lots of people have ended up working from home, either part-time or full-time, since the start of the pandemic. But I’m not sure many of us have appreciated how extraordinary that shift has been. In effect, an enormous amount of what economists would classify as “residential capital” was converted to commercial real estate almost overnight: that is, people used their homes, along with capital that had often been installed there mostly for other purposes (like entertainment), to do their work.

The size of the shift is remarkable. Janice C. Eberly, Jonathan Haskel and Paul Mizen discuss “Potential Capital: Working From Home, and Economic Resilience” (NBER Working Paper 29431, October 2021, subscription needed). They compare the drop in economic output from the workplace in the first two quarters of 2020 to the overall drop in economic output: in the US economy, for example, they find that output in the workplace fell by about 17%, but total economic output actually fell about 9%. Work done outside the conventional workplace made up the difference.

This built-in resilience of the economy may now seem pretty obvious, but it wasn’t obvious (at least to me) before the pandemic hit. The magnitudes here are enormous. According to the US Bureau of Economic Analysis, the value of residential real estate in 2020 was almost $25 trillion. Privately owned nonresidential structures were worth almost $16 trillion, while the equipment in those structures was another $7 trillion. In short, trillions of dollars of residential capital replaced trillions of dollars of nonresidential capital in a very short time. The transition was far from seamless or painless, of course, but the fact that it happened at all is worth a gasp.

The Eberly, Haskel and Mizen paper is academic research, so if you are not already initiated into the world of growth accounting frameworks and decompositions, it won’t be an easy read. But their broad arguments about the “potential capital” embedded in residences and the resilience of economies that can use that capital make me think about the shifting nature of work and jobs.

The nature of work for most people, at least going back to the industrial revolution, is that (as the authors write) “the point of going to the workplace is that workers have capital with which to work.” After all, that’s why manufacturing workers headed off to assembly lines. It’s also why so many people in service industries like finance and real estate needed to go to the office: it’s where the capital was to carry out their jobs. But now, when we talk about why it might be important or useful for workers to return to in-person work, it’s become clear that for many modern jobs in the services-and-information economy, going to the workplace to use the capital at the workplace is less important, or even not important at all. Instead, for a number of workers, the arguments for going to the workplace are more about communication between workers, training new workers, monitoring how much work is getting done, and the ways in which in-person communication may solve problems or spur innovation.

An obvious question is whether the shift to work from home will continue. It (now) seems obvious that working from home will remain higher after the pandemic than before, but how much higher seems uncertain to me. After all, we are only a very short time into this social and economic experiment. One can imagine a future evolution of the economy in which firms with mostly at-home workers compete with firms that have mostly in-the-office workers. It may turn out that the two types of firms have different strengths and weaknesses. For example, one possibility is that in-the-office firms are better at hiring, training, and certain kinds of innovation, but at-home firms offer greater flexibility and specialized skills. It may also be that workers have varying preferences over the context in which they would like to work. Young adult singles might like an in-person work experience, while middle-aged marrieds might see greater advantages in working from home. During the pandemic, many workers were able to do their basic tasks from home, but just because a working arrangement functions OK as a forced expedient under the stress of a pandemic doesn’t mean it’s also a long-run answer.

Dementia: A Public Policy Challenge

As the US population ages, the number of people with dementia keeps rising. It’s a problem from hell and a huge social challenge. A committee convened by the National Academy of Sciences offers an overview in Reducing the Impact of Dementia in America: A Decadal Survey of the Behavioral and Social Sciences (September 2021, available with free registration). The committee writes (references and citations omitted): “More than 6 million people in the United States are currently living with Alzheimer’s disease, a number that will rise to nearly 14 million by 2060 if current demographic trends continue. It is estimated that approximately one-third of older Americans have Alzheimer’s or another dementia at death …”

Here, I’ll focus on the economic side of the issues and set aside the direct costs and reduced quality of life for the person with dementia, although the personal side affects my own extended family, along with so many others. The core of the economic problem is that people with dementia need care. The NAS report notes (again, citations omitted):

The primary economic costs of dementia to persons living with dementia and their families are (1) medical and long-term care costs, and (2) the value of unpaid caregiving provided by family (most commonly) and friends. Most estimates of these costs in the literature draw on such nationally representative data sources as the Health and Retirement Study, the Medicare Current Beneficiary Survey, and Medicare claims data. An estimate of annual per-person costs for 2019, which includes health care and the value of unpaid care provided to persons with Alzheimer’s disease, is approximately $81,000 ($31,000 is the value of the unpaid care). This estimate is about four times higher than the costs of the same care provided to similarly aged persons without the disease. …

Residential care is very expensive. Estimates of the typical costs of long-term care range from $52,624 per year for a home health aide to $90,000 for a semiprivate room in a nursing home and up to $102,000 for a private room. Medicaid, which covers long-term care for low-income individuals and those who become poor as a result of paying for health care and long-term care, is the largest public payer for long-term care, covering 62 percent of nursing home residents, and one-quarter of adults with dementia who live in the community are covered by Medicaid over the course of a year.

When aggregated to the U.S. population, the costs are estimated to have exceeded $500 billion in 2019 and are projected to increase to about $1.5 trillion by 2050. Unaccounted for in these estimates are other economic costs, such as the impact on caregivers’ wages and future employability; when included, these costs increase estimates of unpaid caregiver costs by as much as 20 percent. Moreover, these costs may be underestimated because the physical and mental strain associated with unpaid caregiving likely translates to other costs, such as for caregivers’ own health care. … Other costs unaccounted for include financial harm to persons living with dementia and their families. Cognitive impairment may lead to financial decision-making errors, including payment delinquency and susceptibility to financial exploitation, starting years before diagnosis. Financial harm to individuals living with dementia may also have long-term implications for the surviving spouse.

What might be done? One can try to think about ways of providing the needed services less expensively, but without compromising quality. One can think about steps that might reduce the incidence of dementia. One can hope for a cure. All of these seem worth trying; none at present seems especially promising.

The idea of less expensive and higher quality care is of course enticing, and perhaps it can be delivered by some combination of facilities designed for dementia patients, which would try to free up the time of human staff to provide care by handing off other tasks like cleaning and cooking to lower-cost automation. But I’m not aware of any big success stories along these lines.

There is strong evidence that being in better health overall reduces one’s chance of dementia. As the report notes: “For example, robust evidence suggests that people who take such common-sense measures as eating a healthy diet, exercising regularly, maintaining a healthy weight, and reducing cardiovascular risk have a lower risk of dementia.” Of course, a step-increase in healthy behaviors would have many other benefits as well, but I’m unaware of any big success stories that would dramatically improve health in this way beyond current levels.

Will technology ride to the rescue? Maybe. The FDA has just approved aducanumab, the first new drug in decades intended to treat Alzheimer’s disease. With wider use, we’ll see how well it works, and perhaps develop something better. But new technologies come at a cost, too. The NAS report describes the issue this way:

First, more than 130 innovative treatments for Alzheimer’s disease and related dementias are being investigated in clinical trials, and some may turn out to slow or halt disease progression and reduce costs. A simulation study found that a hypothetical treatment innovation that delayed the onset of Alzheimer’s disease by 5 years would reduce the population with the disease by 41 percent in 2050, which would reduce annual costs by $640 billion. However, novel treatments, which would likely have high prices, could exacerbate the overall economic impact of the disease. …

The recent approval by the U.S. Food and Drug Administration (FDA) of the first new drug in decades that is intended to treat Alzheimer’s disease, aducanumab, is likely to have substantial impact on the cost picture. … The manufacturer of aducanumab initially estimated that 1 to 2 million persons would currently be eligible to receive the medication, although that number may change depending on eligibility guidelines. Using the manufacturer’s estimated cost of $56,000 per patient per year, the total cost just for the drug could range from $56 billion to as much as $112 billion. Whatever number of people ultimately receive the drug, such estimates do not include the costs of infusion, monitoring and treating adverse effects, and additional pre-administration testing. The magnitude of ancillary costs is not yet established, but observers have suggested that they could add tens of thousands in costs per eligible patient. To put the cost of the drug alone into perspective, the total 2021 National Institutes of Health budget is $43 billion and the total 2021 Medicare budget is $688 billion.
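A quick back-of-the-envelope check confirms the range quoted above; all figures are taken directly from the NAS passage, and nothing here is new data.

```python
# Inputs from the quoted NAS report
cost_per_patient = 56_000                       # manufacturer's estimated price per year
eligible_low, eligible_high = 1_000_000, 2_000_000  # initial eligibility estimate

total_low = cost_per_patient * eligible_low
total_high = cost_per_patient * eligible_high
print(f"${total_low / 1e9:.0f} billion to ${total_high / 1e9:.0f} billion")

# For scale: the entire 2021 NIH budget was about $43 billion,
# so even the low-end drug cost alone exceeds it
nih_budget = 43_000_000_000
print(total_low > nih_budget)
```

Note that, as the report stresses, these totals exclude infusion, monitoring, adverse-effect treatment, and pre-administration testing, so they understate the full cost.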

It’s past time for an Operation Warp Speed aimed at dementia, which would guarantee that the government would purchase a certain quantity of the drug in exchange for meeting certain health and cost-per-patient targets. But barring salvation via technology, the question of how society will treat its dementia patients–especially those who do not have family caregivers or financial resources–is looming over our health care policy debates.