Physical Realities of Moving to Cleaner Energy

I sometimes become exasperated when progress toward clean energy is measured by, say, the amount of a government tax credit for buying electric vehicles, or the announced dates for when a state or city will reach zero carbon emissions, or whether an international agreement is signed or not. My annoyance is that while policy changes or timelines or signatures can be useful in spurring change, reaching clean energy goals requires actual physical changes in the generation, storage, transportation, and usage of energy. The extent to which the needed physical changes are happening or not is literally the entire issue.

In this spirit, the McKinsey Global Institute has published a report called “The hard stuff: Navigating the physical realities of the energy transition” (August 14, 2024). The authors write:

This energy transition is in its early stages. Thus far, deployment of low-emissions technologies is only at about 10 percent of the levels required by 2050 in most areas, and that has been in comparatively easy use cases. More demanding challenges are bound to emerge as the world confronts more difficult use cases across geographies. Complicating the task of building a new low-emissions energy system is that it coincides with the need for it to continue to grow to expand access to energy for billions of people who still do not have it, thereby economically empowering them. This transition also needs to address rising concerns about energy affordability and security as well as the role of the energy system in ensuring industrial competitiveness. Moreover, the aspiration is for a rapid energy transition.

The report focuses on 25 physical tasks that need to happen across seven areas–some of them with technologies that are far from mature. Here’s a graphic to illustrate. The light blue hexagons still require technological progress, but face the lowest physical hurdles. The medium blue hexagons require both acceleration of known technologies and expanded infrastructure. The dark blue hexagons (12 of the 25 categories) are where “the transformation is just beginning.”

The report goes into these 25 categories in some detail. Here are some high points from the seven main categories:

—Power. Overall, low-emissions power generation capacity would have to increase about ten times by 2050. There are two Level 3 challenges: managing variability in the power system as solar and wind generate a greater share of power, and doing so in emerging power systems that need to grow particularly rapidly. The flexible capacity that would be required to manage this variability, including backup generation, storage, and interconnections of grids in different regions, would need to grow two to seven times faster than power demand, but all face barriers. …

—Mobility. The number of EVs would need to surge from about 30 million on the road today to about one billion by 2050. … Scaling EV charging infrastructure and supply chains has further to go and is Level 2. Trucking, aviation, and shipping are harder to decarbonize, given that they require traveling long distances with heavy payloads, and are Level 3 challenges.

—Industry. Decarbonization of the “big four” industrial material pillars of modern civilization—steel, cement, plastics, and ammonia—poses four Level 3 challenges, where the transformation is just beginning. …

—Buildings. Heating accounts for the largest share of buildings-related emissions. Heat pumps are already established technologies and perform well, but … [m]ore demanding, and therefore Level 2, is managing a potential doubling or tripling in peak power demand in some regions if heat pump use expands.

—Raw materials. Demand for critical minerals, like lithium, cobalt, and rare earths, is expected to surge, but current supply is only about 10 to 35 percent of what would be needed by 2050. …

—Hydrogen and other energy carriers. New energy carriers would be needed to serve as alternative fuels and feedstocks for industrial processes. One option is hydrogen, which faces two Level 3 challenges. First, the hydrogen molecule goes through many steps and therefore energy losses before it can be used; these would need to be minimized and weighed against its advantageous properties. Second, hydrogen production and infrastructure would need to expand hugely. Few large-scale low-emissions hydrogen projects are currently operational. …

—Carbon and energy reduction. Alongside measures to substitute high-emissions technologies for low-emissions ones, reducing the amount of energy consumed and the emissions of current technologies would also be needed. … Carbon capture from new “point sources” such as cement could be three times harder—and costlier—than for less demanding current use cases, and removing carbon from the atmosphere through direct air capture could be even more costly.

The first international treaty to address greenhouse gas emissions was signed at Kyoto 27 years ago, back in 1997. Frankly, as I think about the challenges listed in the McKinsey report and the current state of world energy production as a whole, the rough estimate that 10% of the (comparatively easy) progress has been made seems too optimistic. To put it another way, the announcements of timelines and treaties about zero-carbon or low-carbon energy, or about government spending and subsidies toward that goal, look wildly optimistic compared to the amount of actual physical change that has happened. The decisions made back in the 1980s for countries other than France (!) to steer away from zero-carbon nuclear power, and even more the decision to back off from innovating in that area, are now coming home to roost.

Simon Newcomb on Public Opinions and Economic Insights

Simon Newcomb (1835-1909) is largely unknown today, but he was a highly prominent economist back in the day: as one example, he was active in the disputes that led to the founding of the American Economic Association back in 1885. In July 1893, he published an essay on “The Problem of Economic Education” for the prominent (both then and now!) Quarterly Journal of Economics. The essay argues that there are basic insights of economics–well-known as of 1893–that are largely unknown or ignored by the general public. What I found thought-provoking was that a number of these insights appear equally unknown to much of the public, as well as to many policymakers, here in the third decade of the 21st century.

As part of setting the stage, Newcomb writes:

The disagreement in question is not between different classes of economic students or different schools of thought, but between well-established economic conclusions on the one hand and the ideas of the public on the other. … What I first propose to show is that we have to deal with ideas centuries old, on which the thought of professional economists has never made any permanent impression except, perhaps, in Great Britain, and that in the everyday applications of purely economic theory our public thought, our legislation, and even our popular economic nomenclature are what they would have been if Smith, Ricardo, and Mill had never lived, and if such a term as political economy had never been known. … Great changes in public sentiment do not occur suddenly, and economists must expect many years of hard work before the doctrines which they oppose are wholly rejected.

What kinds of claims does Newcomb have in mind? He points out that, then and now, many people view it as impossible for trade to benefit the economies of both countries and believe that, instead, international trade must necessarily involve one country taking advantage of the other. In my own experience, this view is often framed essentially as “exports are good, imports are bad.” Here’s Newcomb:

Before such a thing as economic science was known arose the theory of the “balance of trade.” The fundamental doctrine of this theory was that trade was advantageous or disadvantageous to a nation according as the value of its exports exceeded or fell short of the value of its imports. Accordingly, in the nomenclature of the time, an unfavorable balance of trade or state of credit meant one in which the imports were supposed to exceed the exports, and a favorable balance the contrary. An immediate corollary from this view was that trade between two nations could not be advantageous to both, because the values which each exported to the other could not both be greater than those received from the other. …

For a century and a half the doctrine entertained and taught by economists is that there can be no trade between two nations which is not advantageous to both; that men do not buy or sell unless what they receive is to them more valuable than what they give in exchange; and that what is true of the individual man is, in this respect, true of the nation. And yet the combined arguments of economists for a hundred years have not sufficed to change the nomenclature or modify the ideas of commercial nations upon the subject. … The terms “favorable” and “unfavorable,” as applied to the supposed balance of trade, still mean what they did before Adam Smith was born. We might well tremble for the political fate of any statesman who should publicly maintain that our exports would, in the long run, substantially balance our imports, no matter what policy we adopted; and that, if this equality could be disturbed, the advantage would be on the side of the nation which imported the greater values.

What about alternative perspectives on “job creation”? Newcomb writes:

The divergence between the economist and the public is by no means confined to foreign trade. We find a direct antagonism between them on nearly every question involving the employment of labor and the relation of industry to the welfare of the community. The idea that the utility and importance of an industry are to be measured by the employment which it gives to labor is so deeply rooted in human nature that economists can scarcely claim to have taken the first step towards its eradication. From the economic point of view, the value of an industry is measured by the utility and cheapness of its product. From the popular point of view, utility is nearly lost sight of, and cheapness is apt to be considered as much an evil on one side as it is a good on the other. The benefit is supposed to be measured by the number of laborers and the sum total of wages which can be gained by pursuing the industry.

Or here’s Newcomb on a related idea, that producing at lower prices is a good thing:

A few years ago, during the Congressional debate upon the proposed tax on artificial butter, it was claimed on one side that, if the free manufacture of this article were permitted, there was every prospect that within a few years butter would cost only ten cents a pound. … [I]t was put forth as an argument against permitting the manufacture. The most curious feature of the debate, and the one which has led me to cite it in this connection, is that there seems to have been no one present bold enough to join issue on the conclusion, and to claim that, if there was a prospect that the community at large would soon be able to obtain butter at ten cents a pound, it would be a good thing for us all. And yet there is no proposition on which we find a more general agreement among those who professionally study the subject than that modern economic progress consists very largely in cheapening processes, and that whatever evil may arise from cheapened production is only a transient one, which is compensated many-fold by placing an increased supply of the necessaries of life within reach of the masses.

What about price controls? As Newcomb writes, certain regulations in extreme cases can make sense. But attempts to help the poor by controlling prices often result in the poor receiving less of the item in question.

The wide prevalence of usury laws among us affords another example of the persistence of the ideas of a former age long after they have been shown fallacious, not only by thoughtful investigation, but by general business experience. It is not necessary that we should condemn every attempt to regulate the rate of interest, any more than to regulate other contracts in exceptional cases. The point to which we ask attention is the general belief throughout the community that the rate of interest can practically be regulated by law. Not dissimilar from this is the wide general belief that laws making it difficult to collect rents and enforce the payment of debts are for the benefit of the poorer classes. They are undoubtedly for the benefit of those classes who do not expect to pay. But the fact, so obvious to the business economist, that everything gained in this way comes out of the pockets of the poor … is something which the law-making public have not yet apprehended.

Newcomb argues that economic thinking is often dismissed as the “dismal science” not because it is incorrect, but because people and politicians would very much prefer not to think about how the economy actually works or what tradeoffs are likely to arise. He writes:

Probably no phrase ever used by Carlyle has met with wider currency than the epithet “dismal science” which he applied to political economy. And yet a little consideration will show that political economy is dismal only in the sense that every conclusion as to what man cannot do may be called dismal. A stormy voyage across the Atlantic is very dismal; but no one from that premise ever drew the conclusion that boys ought not to learn anything about the Atlantic Ocean, or censured the meteorologist who tells us that the ocean is rough in winter, and will make the landsman seasick. That you cannot eat your cake and have it, too, is a maxim taught the school-boy from earliest infancy. But, when the economist applies the same maxim to the nation, he is met with objections and arguments, not only on the part of the thoughtless masses, but of influential and intelligent men.

As Newcomb points out, certain issues are not decided by popular vote. For example, whether a steam locomotive works, or doesn’t work, is not determined by majority rule, but by underlying realities of physical design. Economics is of course a social science, not a physical science, so the parallel is necessarily imperfect. But it remains true that the outcome of economic policies is not determined by the announced intentions of politicians or by their popularity, but by the underlying realities of how firms and consumers will react.

Newcomb argues that, in his time, economists have a tendency to shy away from arguing with the public. Who wants to get yelled at, then or now? He wrote: “It must also be conceded that we see in recent times a growing disposition among economists to abandon this particular field of conflict, with the expressed or implied admission that, after all, the wisdom of the public and the common sense of the masses may be a better guide than the theories of students and philosophers.”

But here in the US, we are living in a time when proposals to place large limitations on foreign trade are common: I cannot think of a time since Vice President Al Gore defended the North American Free Trade Agreement against the criticism of Ross Perot back in 1993 that a prominent US politician has even tried to make a pro-trade case. Current proposals to put price limits on everything from rent to pharmaceuticals to groceries are common as well. Politicians often seem to have a hard time making a distinction between the number of jobs, which is not the major issue in a US economy where the unemployment rate has been around 4% for the last two years, and the quality of those jobs in terms of the wages, benefits, training, and opportunities that emerge from the matches between workers and employers in the labor market. No leading politician is willing to take the idea of “you can’t eat your cake and have it too” and tackle well-known issues like the trendlines to insolvency of Social Security and Medicare, or the need to bring down budget deficits over time.

In these economic arguments and others, there is often a casually aggressive dismissal of the insights that economists have to offer, which is roughly akin to ignoring the weather report when you plan a wilderness hike or a day at the beach. As Nikita Khrushchev may have sort of once said: “Economics is a subject that does not greatly respect one’s wishes.”

The US Decouples from China, While the EU Tightens Ties

Through the Trump and Biden administrations, it has been a policy goal to reduce US imports from China. While US imports from China have declined, EU imports from China have been on the rise. Mary E. Lovely and Jing Yan tell the story in “While the US and China decouple, the EU and China deepen trade dependencies” (Peterson Institute for International Economics, August 27, 2024).

A common way to measure the concentration of a market, whether looking at the extent of market competition in a US industry or the extent of concentration in foreign trade, is called the Herfindahl-Hirschman index. The arithmetic is simple: divide the market into shares, square the shares, and add them up. For example, say that an industry has four firms that each have 20% of the market, and then 20 firms that each have 1% of the market. Then the HHI would be 4(20²) + 20(1²) = 1,620. You can do a similar calculation for a country’s reliance on imports: that is, look at the share of imports a country is receiving from each of its trading partners, square the shares, and add them up.

(The uninitiated may wonder why the market shares should be squared. Why not, say, just add up the market share of the four biggest or the ten biggest firms? Indeed, sometimes you do see measures of the “four-firm” or “ten-firm” concentration ratio calculated in this way. But say that you are comparing two industries. In both of them, the four-firm concentration ratio is 80. However, in one industry the four largest firms each have 20% of sales; in the other industry, the largest firm has 77% of sales and the next three firms each have 1%. The four-firm ratio is the same, but obviously, the second industry is much more concentrated. Squaring the shares puts additional weight on the biggest firms when measuring concentration.)
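As a quick sketch in Python, using the hypothetical industries from the text, the two measures can be computed side by side:

```python
# Herfindahl-Hirschman index: square each firm's market share
# (measured in percentage points) and sum the squares.
def hhi(shares):
    return sum(s ** 2 for s in shares)

# Four-firm concentration ratio: sum of the four largest shares.
def four_firm_ratio(shares):
    return sum(sorted(shares, reverse=True)[:4])

# Industry from the HHI example: four firms at 20% plus twenty firms at 1%.
industry = [20] * 4 + [1] * 20
print(hhi(industry))  # 4 * 400 + 20 * 1 = 1620

# Two industries with the same four-firm ratio but different structure.
equal_four = [20, 20, 20, 20]   # four equal firms
one_dominant = [77, 1, 1, 1]    # one dominant firm
print(four_firm_ratio(equal_four), four_firm_ratio(one_dominant))  # 80 80
print(hhi(equal_four), hhi(one_dominant))  # 1600 5932
```

The HHI separates the two cases that the four-firm ratio treats as identical, because squaring weights the dominant firm’s 77% share far more heavily than the small shares.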

When Lovely and Yan do this calculation for the US, the EU, and China, comparing 2013, 2018, and 2023, they find:

China’s import sourcing is the most diverse of the three big regions; its concentration index is substantially lower than those of the other two, figure 1 shows. Only the European Union provides more than 10 percent of the total value of China’s imports. The European Union also relies on a diverse set of suppliers; just two sources—the United States and China—provide 10 percent or more of the European Union’s import value in 2023. Focusing on changes over time, figure 1 also shows that both China and the United States reduced their sourcing concentration between 2018 and 2023, a reversal after the growth between 2013 and 2018. In contrast, Europe’s concentration rose across the full 10-year period.

The authors find a loosely similar pattern when they focus only on imports of manufactured goods. That is, the US is less reliant on manufactured imports from China, but the share of imports from the EU, Mexico, and other countries has risen. China has seen a greater share of its manufactured imports from the US and from Taiwan, but a drop from others. In the EU, the share of manufactured imports from China is up substantially.

In short, these patterns seem to suggest that Chinese exports no longer headed to the US economy are, to a substantial extent, ending up in the EU economy instead. This pattern suggests that if the goal of US trade policy is to reduce China’s footprint in the global economy, it is unlikely to do so. Indeed, given that imports often pass through the production process in several countries on their way to a final product, it’s plausible that some Chinese exports are going to Mexico and the EU, being incorporated into other products, and then ending up as US imports.

Must Optimism Be Naive? Thoughts from Wynton Marsalis and His Definition of Jazz

There’s a certain more-cynical-than-thou mindset, which sometimes passes for sophistication, in which optimism is derided as naive. I’m not a fan. As someone of the “radical moderate” persuasion, it seems to me possible to steer between a blind optimism that we live in the best of all possible worlds and a blistering cynicism that the world is going to hell in a handbasket.

Perhaps surprisingly, jazz great Wynton Marsalis viewed an underlying optimism as one of the key ingredients of jazz. Here’s a comment from Marsalis in a 2014 interview (just a little before the 13:00 mark):

There are three elements of jazz that have to be present. 

One is improvisation, which is the “I” part, freedom to express yourself. The second is Swing, which is the opposite of the “I”, it’s the “us”. Swing is a matter of coordination and balance. It teaches you diplomacy. Yes, you have freedom, but other people have freedom too, so how are you going to have that together? How is your freedom going to go from “yours” to “ours”? Then, the Blues aesthetic is our spiritual view, which is optimism in the face of adversity. An optimism that is not naive. This is life, bad things happen. That’s a fact of being alive. There’s no perfection. If you’re out here, you are paying dues. How do you deal with those dues? How do you use what you have to be resilient and to deepen your humanity through the tragedy and the struggle? And how can you express the depth of that humanity that is earned in a way that will uplift people? The feeling that we call soul comes out of the Blues aesthetic and it’s also an essential ingredient to our music… All three of those things must be present.

Anyone who listens to my opinions on jazz is an idiot, but for the sake of full disclosure, I’ll just say that my feelings about jazz are similar to my feelings about modern art in general: some of it I like a lot, and some of the more experimental stuff leaves me cold. However, at least according to my beloved wife, my tastes about art in general (modern or not, music, visual, or mixtures of the two) are only very weakly predictable.

My point here is just to note that an underlying optimism in the face of the fact that life brings adversity doesn’t have to be naive. Instead, optimism in the face of adversity can be a productive and beautiful aesthetic.

Disability Weighting

A year or so back, I was having a discussion with a non-economics student about some difficulties in doing benefit-cost analysis, because it can be hard to put values on some outcomes. The student listened to me patiently, and then said: “Well, it’s not like you have to put values on everything.”

And I thought: That feeling is one of the fundamental differences between economists and non-economists. When there are both benefits and costs, and choices about how to proceed, we as individuals and as a society do in fact put values on everything. These values may in many cases be implicit. We don’t explicitly say what it’s worth in dollar terms to, say, make a highway off-ramp safer or to raise the math test scores in an elementary school by a certain amount. But in making decisions about what to do, or not to do, our values are expressed nonetheless. And then economists come along and try to estimate these values explicitly, which can make non-economists deeply uncomfortable.

For an example of explicit weights in an area where noneconomists would be tempted to say “you don’t need to put a number on everything,” consider the “disability weights” from the Global Burden of Disease study produced by the Institute for Health Metrics and Evaluation (based at the University of Washington, but with research collaborators around the world). To evaluate health outcomes, trends, and policies, it’s clearly not enough just to measure lives lost or life expectancy. One also needs to account for the costs of disability and sickness. Thus, the project uses “disability weights”:

Disability weights, which represent the magnitude of health loss associated with specific health outcomes, are used to calculate years lived with disability (YLD) for these outcomes in a given population. The weights are measured on a scale from 0 to 1, where 0 equals a state of full health and 1 equals death. This table provides disability weights for the 440 health states (including combined health states) used to estimate nonfatal health outcomes for the GBD 2021 study.

You can download the full “disability weights” table using data for 2021 here (with free registration). For example, on the scale from zero (perfect health) to one (death), mild diarrhea (three times a day, with some intestinal discomfort) has a weight of .07. Moderate diarrhea (more than three times a day, with severe belly cramps) has a weight of .18. Severe diarrhea (more than three times a day, severe belly cramps, nausea, and thirst) has a weight of .24.

Or as another example, mild early syphilis (low fever) has a weight of .005. However, severe disfigurement, neurological and cardiovascular problems from adult tertiary syphilis has a weight of .54.

Or as one more example, mild Alzheimer’s disease and other dementia (some trouble remembering recent events, hard to concentrate) has a weight of .06. Moderate Alzheimer’s (memory problems, disoriented, needs help with some daily activities) has a weight of .37. Severe Alzheimer’s (complete memory loss, no longer recognizes family members, needs help with all daily activities) has a weight of .44.
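To make the role of these weights concrete, here is a minimal sketch of how they feed into a years-lived-with-disability (YLD) calculation. The weights are the ones quoted above; the person-year counts are invented purely for illustration, and the real GBD calculation is considerably more elaborate.

```python
# Disability weights quoted in the text (GBD 2021).
disability_weights = {
    "mild diarrhea": 0.07,
    "moderate diarrhea": 0.18,
    "severe diarrhea": 0.24,
}

# Hypothetical person-years lived in each health state in some population.
# (Invented numbers, purely for illustration.)
person_years = {
    "mild diarrhea": 10_000,
    "moderate diarrhea": 2_000,
    "severe diarrhea": 500,
}

# In its simplest form, YLD = sum over health states of
# (person-years in the state) x (disability weight of the state).
yld = sum(person_years[state] * disability_weights[state]
          for state in disability_weights)
print(yld)  # roughly 1180: 700 + 360 + 120
```

The point is only that once the weights exist, adding up health loss across very different conditions becomes ordinary arithmetic; the hard part, as the text notes, is where the weights come from.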

Obviously, one can ask questions about where these weights come from and disagree with them. The Global Burden of Disease website has background to try to persuade you that their choices are at least reasonable ones, given the inevitable range of uncertainty.

You may also feel a degree of discomfort that isn’t just about the accuracy of these numbers, but about whether it is either unsuitable or impossible to estimate disability weights at all. But remember, how we (as individuals or as a society) think about health trends, as well as what health-related policies we prioritize, or don’t, is based in part on our own disability weights–whether we like putting a number on it or not.

James Tobin on Finance and Social Welfare: 40 Years Ago

The key question is not whether finance has productive uses–of course it does–but whether the US and other high-income economies are devoting more resources to finance than can be justified by the gain in social welfare from these activities. Back in 1984, James Tobin thought the answer might be “yes.” A transcript of the lecture, “On the Efficiency of the Financial System,” was published in the July 1984 issue of the Lloyds Bank Review. Tobin wrote:

Just the other day, the New York Times listed forty-six business executives whose 1983 compensation (salary and bonus, exclusive of realizations of previously acquired stock options) exceeded one million dollars. What struck me was that sixteen members of this elite were officers of financial companies. No wonder, then, that finance is the favourite destination of the undergraduates I teach at Yale, and that 40 per cent of 1983 graduates of our School of Organization and Management took jobs in finance. … All university educators know that finance is engaging a large and growing proportion of the most able young men and women in the country.

Tobin immediately acknowledges the social functions of finance: facilitating transactions; channeling the economy’s pool of savings to productive investment in physical and human capital; pooling risk through insurance and hedging; and providing incentives for the information and analysis that serve as part of the checks and balances on corporate management, while also providing valuations and guidance for existing firms and entrepreneurs. But as Tobin points out, the financial system accomplishes a number of these goals only in a partial way, and in some cases–a recent example would be the Great Recession of 2007-09–breakdowns in the financial system contribute to an overall economic crisis.

I want to point out that the services of the [financial] system do not come cheap. An immense volume of activity takes place, and considerable resources are devoted to it. Let me remind you of some of the relevant magnitudes. Item: The Department of Commerce categories Finance and Insurance generate 4 1/2 to 5 per cent of GNP, account for 5 1/2 per cent of employee compensation, and occupy about 5 per cent of the employed labour force. They account for 7 1/2 per cent of after-tax corporate profits. About 3 per cent of personal consumption, as measured by the Commerce Department, are financial services. These figures do not include the legal profession. It amounts to about 1 per cent of the economy, and a significant fraction of its business is financial in nature.

Is it worth it? Tobin admits that he can offer only suspicions, not a proven case.

Any appraisal of the efficiency of our financial system must reach an equivocal and uncertain verdict. In many respects, as I have tried to indicate, the system serves us as individuals and as a society very well indeed. As I have also tried to say, however, it does not merit complacency and self-congratulation either in the industry itself or in the academic professions of economics and finance. …

I confess to an uneasy Physiocratic suspicion, perhaps unbecoming in an academic, that we are throwing more and more of our resources, including the cream of our youth, into financial activities remote from the production of goods and services, into activities that generate high private rewards disproportionate to their social productivity. I suspect that the immense power of the computer is being harnessed to this `paper economy’, not to do the same transactions more economically but to balloon the quantity and variety of financial exchanges. For this reason perhaps, high technology has so far yielded disappointing results in economy-wide productivity. I fear that, as Keynes saw even in his day, the advantages of the liquidity and negotiability of financial instruments come at the cost of facilitating nth-degree speculation which is short-sighted and inefficient.

Many of Tobin’s concerns from 40 years ago have a very modern flavor. In recent years, about one-third of all Harvard and Yale undergraduates (not just those in the School of Management, as in Tobin’s speech) take jobs in finance or consulting after graduation. The Finance, Insurance, and Real Estate sector is now about 7-8% of GDP, substantially higher than 40 years ago.

Perhaps most troubling, it seems as if finance should be a classic economies-of-scale industry. Imagine that I have $10,000 invested in a mutual fund, and then I raise that amount to $20,000. Sure, it costs the managers of the fund something to set up my account, and to inform me about it. But it seems unlikely that the costs of managing $20,000 are twice as much as managing $10,000–or that the costs of managing $100,000 are ten times as much as the costs of managing $10,000. However, the income received by the financial sector has tended to track the total quantity of resources received over time–that is, an industry that seems as if it should be displaying economies of scale does not seem to be doing so (for discussion, see here).

I have no more proof of the inefficiencies of the financial sector than did Tobin, 40 years ago. Indeed, the fact that the same complaints were being made 40 years ago perhaps suggests that the complaints themselves are both real, given the imperfections of any real-world set of markets, and also overblown.

But perhaps being a party who is at the table when financial transactions are made just gives the financial sector a chance to cut off a slice of the pie in every deal. After all, when you are buying tickets to a show or taking out a mortgage or the credit card company is charging the store for your transaction, you don’t have a realistic option in the moment to shop around. If a company is issuing bonds, dealing with a private equity firm, repurchasing stock, or hedging exchange rate risk, it has a limited number of financial parties with whom it will want to deal, and putting out such financial deals to the lowest bidder doesn’t seem realistic. Thinking about who sits in the middle of transactions–what mixture of rules and competition caused them to be there, and what they do to deserve their seat at the transaction–seems worth some thought.

For a variety of modern takes on the question of whether finance is worth it, see the “Symposium on the Growth of the Financial Sector” in the Spring 2013 issue of the Journal of Economic Perspectives (where I work, then and now, as Managing Editor):

The Sweet Touch of Immigration

The World Bank started its World Development Report 2023: Migrants, Refugees, and Societies with this story:

The priestly leaders of the Parsis were brought before the local ruler, Jadhav Rana, who presented them with a vessel full of milk to signify that the surrounding lands could not possibly accommodate any more people. The Parsi head priest responded by slipping some sugar into the milk to signify how the strangers would enrich the local community without displacing them. They would dissolve into life like sugar dissolves in the milk, sweetening the society but not unsettling it. The ruler responded to the eloquent image and granted the exiles land and permission to practice their religion unhindered if they would respect local customs, and learn the local language, Gujarati.
—Parsi legend

Yes, I do actually know that immigration policy is more complicated than this. But it’s also a nice story, with some truth in it, and thus seemed worth sharing.

A US Inflation Round-Trip?

The US inflation rate rose, and now has fallen. How close is the US economy to completing the round trip? For thinking about this question, here are a few useful figures from the posts at the blog of the Council of Economic Advisers.

(For those unfamiliar with the CEA, it’s administratively part of the White House. It has about three dozen employees. The three “members” who are in charge are appointed by the president and confirmed by the Senate. However, the members are mostly from academia and/or think tanks–and they will usually go back to their home institution after a few years. Thus, the members are political partisans, but they also have reason not to go too far in a way that would risk their credibility upon their return to their home institutions.)

This graph is from a post called “Apples to Äpfel: Recent Inflation Trends in the G7” (June 27, 2023). Government statisticians measure inflation in slightly different ways in each country (for example, in how they measure the costs of rental and owner-occupied housing). The Harmonized Index of Consumer Prices uses a common method that is directly comparable across countries.

As the figure illustrates, inflation rose in these major economies, but rose first in the United States. Inflation seems to have topped out or started declining in the major economies, but the fall started sooner in the United States. (For an updated analysis from some Fed economists, see here.) Why might this be so?

One hint is in a figure from a speech by CEA Chair Jared Bernstein (“Inflation’s (Almost) Roundtrip: What happened, how people experienced it, and what have we learned?” July 30, 2024). The trigger for inflation, as Bernstein emphasizes, was a rise in demand, resulting in substantial part from fiscal stimulus programs during the pandemic, at a time when supply was constrained, both by kinks in global supply chains and by economic activities restricted during the pandemic. Supply shocks, like higher energy prices after Russia’s invasion of Ukraine, also played a role.

On the horizontal axis, this figure shows the cumulative discretionary fiscal boost from just before the pandemic hit at the end of 2019 up to the beginning of 2024. The cumulative boost for the US is much higher than in other countries. Not coincidentally, the US has also had higher real growth of GDP over this time–as well as starting its inflationary process a little sooner than other major economies.

Bernstein argues that the decline in inflation is essentially the supply constraints from the pandemic unwinding. At least in this speech, he does not mention any role played by the Federal Reserve in the timing or speed of the decline in inflation–the Fed showed, by its willingness to raise interest rates, that it did not intend to allow the surge of inflation to become entrenched.

However, economists and actual humans tend to see a decline in inflation rather differently, a point that Bernstein makes with different phraseology. When inflation falls back toward the 2% goal set by the Federal Reserve, economists have a “mission accomplished” feeling. However, actual humans are aware that prices are higher than they were before, and that their pay may not have kept up, so they remain discontented.

This figure is from a recent CEA post on the “July 2024 CPI Report” (August 14, 2024). It shows how inflation spiked higher than the growth of wages back in late 2021 through early 2023, but for more than a year since then, average wages have been growing faster than consumer price inflation. This is generally good news, since it means that workers are recovering their lost purchasing power, but it is the kind of good news that is unlikely to generate much rejoicing, because “what you lost is being returned to you, slowly” is not much of a celebratory slogan.

I claim no expertise in forecasting the future; indeed, my usual feeling is that I only understand the economy after about a 2-3 year lag time, as evidence clarifies itself. But for the record, the Federal Reserve focuses on a particular measure of inflation based on “personal consumption expenditures,” and it looks at the “core” measure of PCE inflation that leaves out the volatile movements of energy and food prices. Here’s how it looks:

As you can see, the overall rate of this benchmark measure of inflation is declining, but the 12-month average has not yet reached the 2% goal. The Federal Reserve has communicated quite strongly over the last year or two that it intends to keep interest rates relatively high, compared to the last decade or so, to reach the 2% goal. Also, central banks in general very much like to have the credibility that comes from keeping their promises. Bernstein says in his July 30 speech that “inflation’s round trip is not yet complete,” and I suspect the Fed feels the same.

Maya Angelou, Self-Editing and Being Edited

The final product of the poet and writer Maya Angelou (1928-2014) is often exquisite, and part of her genius is her care over language. Here’s a comment about self-editing and being edited from a 1990 interview with George Plimpton (“Maya Angelou, The Art of Fiction No. 119,” Paris Review, Issue 116, Fall 1990).

I write in the morning and then go home about midday and take a shower, because writing, as you know, is very hard work, so you have to do a double ablution. Then I go out and shop—I’m a serious cook—and pretend to be normal. I play sane—Good morning! Fine, thank you. And you? And I go home. I prepare dinner for myself and if I have house guests, I do the candles and the pretty music and all that. Then after all the dishes are moved away I read what I wrote that morning. And more often than not if I’ve done nine pages I may be able to save two and a half or three. That’s the cruelest time you know, to really admit that it doesn’t work. And to blue pencil it. When I finish maybe fifty pages and read them—fifty acceptable pages—it’s not too bad. I’ve had the same editor since 1967. Many times he has said to me over the years or asked me, Why would you use a semicolon instead of a colon? And many times over the years I have said to him things like: I will never speak to you again. Forever. Goodbye. That is it. Thank you very much. And I leave. Then I read the piece and I think of his suggestions. I send him a telegram that says, OK, so you’re right. So what? Don’t ever mention this to me again. If you do, I will never speak to you again. About two years ago I was visiting him and his wife in the Hamptons. I was at the end of a dining room table with a sit-down dinner of about fourteen people. Way at the end I said to someone, I sent him telegrams over the years. From the other end of the table he said, And I’ve kept every one! Brute! But the editing, one’s own editing, before the editor sees it, is the most important.

Remember, this quotation is just Angelou talking, not Angelou after a bout of rigorous self-editing. As a person, I’m not confident that I could ask Angelou to change even a punctuation mark; as someone who has worked as an editor for several decades, albeit for an academic journal of economics, I confess that I would be unable to avoid offering suggestions.

Telephone Operators: The Elimination of a Job

My tradition on this blog is to take a break (mostly!) from current events in the later part of August. Instead, I pre-schedule daily posts based on things I read during the previous year about three of my preoccupations: economics, editing/writing, and academia. With the posts pre-scheduled, I can then relax more deeply when floating on my back in a Minnesota lake, staring up at the sky.

______

One of the classic examples of jobs lost to automation is the case of telephone switchboard operators. As late as 1950, there were about 350,000 women working as switchboard operators for the phone company, and maybe another million working as switchboard operators at offices, factories, hotels, and apartment buildings. Roughly one of every 13 working women was a switchboard operator. Of course, now the number of switchboard operators is nearly zero. The example is often given to point out that in a dynamic economy, even when hundreds of thousands of jobs are “lost,” workers do manage to transition to new jobs.

But that basic story lacks detail. James Feigenbaum and Daniel P. Gross have been digging into two aspects: 1) What happened to the women who were displaced from switchboard operator jobs? and 2) For AT&T, what determined the speed and timing of investing in automation to replace switchboard operators?

Feigenbaum and Gross tackle the first question in “Answering the Call of Automation: How the Labor Market Adjusted to Mechanizing Telephone Operation” (Quarterly Journal of Economics, August 2024, 139: 3, pp. 1879–1939). They focus on the period between 1920 and 1940, with data on 3,000 cities. During this period, over 300 cities switched to mechanized switchboards. You can then compare the labor market for women across cities that switched sooner, later, or not at all during this period. They find:

As a first step, we show that after a city was cut over to mechanical [switchboard] operation, the number of 16- to 25-year-old women in subsequent cohorts employed as telephone operators immediately fell by 50% to 80%. These jobs made up around 2% of employment for this group, and even more for those under age 20—and given turnover rates, this shock may have foreclosed entry-level job opportunities for as much as 10% to 15% of peak cohorts. The effect of this shock on incumbent operators was to dispossess many of their jobs and careers: telephone operators in cities with cutovers were less likely to be in the same job the next decade we observe them, less likely to be working at all, and conditional on working were more likely to be in lower-paying occupations. In contrast, however, automation did not reduce employment rates in subsequent cohorts of young women, who found work in other sectors—including jobs with similar demographics and wages (such as typists and secretaries), and some with lower wages (such as food service workers). … Though wage data for this era are more limited, using available data we also do not find evidence that local labor markets re-equilibrated at significantly lower wages.

The stability of both employment rates and wages is consistent with demand growing for these categories of workers in other sectors of the economy—and, in turn, with the predictions of Acemoglu and Restrepo (2018), who suggest that firms will endogenously develop new uses for labor when automation makes it abundant. Buttressing this interpretation, our evidence indicates some occupations expanded to new sectors of local economies after cutovers—that is, the emergence of new work (Autor et al. 2024). Taken together, these results suggest that although existing workers may be exposed to job loss, local economies can adjust to large automation shocks over medium horizons.

The overall message is a conventional one for economists. Yes, labor markets do get disrupted, sometimes severely. There is a case here during the transition for government programs to provide some combination of unemployment insurance and training for other jobs. But the ultimate answer is growth of other employment opportunities.

Feigenbaum and Gross discuss the automation of switchboard operations from the perspective of AT&T in “Organizational and Economic Obstacles to Automation: A Cautionary Tale from AT&T in the Twentieth Century” (Management Science, published online in advance of being assigned to a specific issue, February 27, 2024). They point out a puzzle of timing: mechanical call-switching technology was invented in the 1880s, but AT&T did not install its first dial telephones until decades later, at the Chesapeake & Potomac Telephone Co. in Norfolk, Virginia, in 1919, and the process of phasing out all of the switchboard operators was not completed until 1978. Why did it take so long?

Telephone systems were initially designed to have operators physically connecting calls—a task known as “call switching”— putting them at the center of both the telephone network and AT&T’s production system. Manual switching, in turn, shaped choices and activities across the business, including service offerings, plant and equipment, operations, prices, accounting, billing, customer relations, and more.

Although manual switching served early telephone networks well, expansion revealed its limits, as its complexity rose quickly in large markets with billions of possible connections, and switchboards became system bottlenecks. As AT&T grew, its service quality thus fell, and operator requirements exploded: by the 1920s AT&T was the largest U.S. employer, with operators over half its workforce. Company records show the limits of manual switching were known as early as the 1900s, when automatic technology was already being tested—yet it took AT&T several more decades to adopt it widely. We show in this paper that automation was hindered by interdependencies between call switching and the rest of AT&T’s business: the mechanization of call switching required complementary innovation and adaptation across the firm, which were only resolved over time.

In retrospect, one wonders if AT&T would have moved faster to mechanical switching if it had not been a monopoly! But there is also a more charitable lesson here. Big innovations require widespread organizational adjustment. Such adjustments often require a substantial up-front investment–some of it monetary, some of it an investment in changing organizational and business practices. There are firms and government agencies out there that are still adjusting to information technology and the web, and just starting to make widespread use of tools that have been around for a decade or more. Even if the new artificial intelligence innovations turn out to be the greatest thing since sliced bread, it will take years and decades for them to filter through the economy–indeed, after discovering a new technology, one often makes follow-up discoveries of additional uses for it.