Interview with Daron Acemoglu: Technology and the Shape of Growth

Michael Chui and Anna Bernasek of the McKinsey Global Institute have a half-hour interview with Daron Acemoglu: “Forward Thinking on technology and political economy with Daron Acemoglu” (July 14, 2021; audio and an edited transcript available). Acemoglu has focused for decades on the idea of “directed technological change”–that is, the idea that the direction of technology is not a random outcome determined by scientists, but is to some extent a response to incentives: what areas are being investigated by researchers, and what incentives firms and entrepreneurs face in applying new ideas in the economy.

In Acemoglu’s view, economists of the past too often treated “technology” as a general all-purpose ingredient that tended to raise the productivity of workers. In contrast, Acemoglu points out that technology often has quite particular effects. He says:

And if you look at the way that economists think about technology, it’s this latent variable that makes you just more productive. But very few technologies actually do that. Electricity didn’t make workers more productive. It made some functions in factories more feasible, and some few items more productive. A hammer doesn’t make you more productive in everything. It makes you just more productive in one single, simple task—hammering a nail. And many technologies don’t even do that. The example of spinning and weaving machinery that I gave, or the factory system, or, more recently, databases, software, robots, numerically controlled machines, they are mostly about replacing workers in certain tasks that they used to perform. …

It may benefit some workers more than others. It may well be that computers augment educated workers more than high school dropouts, so inequality can increase. But at the end you shouldn’t see the high school dropouts lose out. Their real wages shouldn’t decline. And the real wages of workers shouldn’t decline. But, in fact, one of the striking but very robust features of the last 40 years of economic development in the US and the UK has been that many groups, especially low-education or middle-education men, have actually seen their earnings fall, some groups by as much as 25 percent, in real terms, since 1980. Phenomenal. This isn’t the American dream.

In the traditional economics approach, this is a nuisance that we often sweep under the carpet. … [I]t is something that doesn’t really fit into this technology as augmenting framework. But when technology, at least in part, is about automation, replacing, displacing workers from their tasks, then this happens quite often. You can have productivity improvements—capital benefits, firms benefit, but workers, especially some types of workers, all workers overall can lose out in real terms. …

[O]nce you go to this micro level, then the direction of technology, the future of technology looked at through the perspective of what type of technologies we’re going to build on, that becomes much richer and much more interesting. It’s not just whether we’re going to increase the productivity of skilled workers versus unskilled workers, both of which benefit all of them since they are complementary. It’s more like, are we going to completely give up on unskilled workers? Are we going to try to replace them? Are we going to try to replace humans? Are we going to create new tasks for humans? How are you going to use the AI platform? All of these questions about the direction of technology become much more alive, and then also the productivity implications become much more interesting.

Acemoglu points out that from the 1950s through the 1970s, the US economy had a high rate of technological change and productivity growth with a pretty stable distribution of income. But since the 1980s, technological change and productivity growth have been accompanied by a more unequal distribution of income. What changed? Acemoglu argues that the overall effects of technological change will be determined by factors like whether it gives rise to new sectors of the economy with opportunities both for today’s displaced workers and for the workers of the future, and whether the technologies generate large gains in productivity (think of large-scale mechanization of agriculture and how it raised output per acre) or just allow replacing workers with machines. In Acemoglu’s view, too much of the innovation surrounding modern automation, robotics, and artificial intelligence is about replacing existing workers, rather than empowering new industries.

The details of an appropriate policy response here are hard to enunciate, and Acemoglu (wisely) shies away from offering granular recommendations. But he does offer these general thoughts:

One is we have to free ourselves from the excessive obsession on automation. It is true in the area of AI. It’s true in other areas, too. [In] our current business community, for a variety of reasons, some of it is cost cutting, some of it is because where the technology leaders in Silicon Valley have sort of set the agenda, some of it is because government policies are just too focused on automating everything.

Instead, we have to come back to a world in which we put as much effort in increasing human productivity, both in the tasks that they already produce, but also creating new tasks in entertainment, in healthcare. There are so many new things that we can do, especially with AI, but some of it with just our existing technologies, some of it with virtual reality or augmented reality. There are many, many things ranging from judgment, social skills, flexibility, creativity, that humans are so much better at than machines.

But we’re not empowering them right now. That’s the first leg. That second leg is that we also have to pull back from using AI as a method of control. And again, that’s about how we use the AI technology. Do we use it to empower individuals? To be better communicators, better masters of their own choices and data? Be able to sort of understand the veracity or the liability of different types of information? Or do we develop these tools in the hands of platforms so that the platforms themselves are doing all of that thinking and all of that direction for the individuals? I think that those two are very different futures as well.

For a more detailed version of Acemoglu’s argument, a useful starting point is his essay with Pascual Restrepo, “Automation and New Tasks: How Technology Displaces and Reinstates Labor,” in the Spring 2019 issue of the Journal of Economic Perspectives (where I work as Managing Editor). Basically, they suggest a framework in which automation can have three possible effects on the tasks that are involved in doing a job: a displacement effect, when automation replaces a task previously done by a worker; a productivity effect in which the higher productivity from automation taking over certain tasks leads to more buying power in the economy, creating jobs in other sectors; and a reinstatement effect, when new technology reshuffles the production process in a way that leads to new tasks that will be done by labor. They then apply this framework to US data.
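
The three effects can be seen in a toy task-based sketch (my own hypothetical numbers, not taken from the paper):

```python
# Toy sketch of the Acemoglu-Restrepo task framework (hypothetical numbers).
# Production is a collection of tasks, each performed by labor or capital.
tasks = {f"task{i}": "labor" for i in range(10)}  # start with 10 labor tasks

# Displacement effect: automation reassigns some tasks from labor to capital.
for t in ["task0", "task1", "task2"]:
    tasks[t] = "capital"

# Reinstatement effect: new technology creates brand-new tasks done by labor.
tasks["task10"] = "labor"
tasks["task11"] = "labor"

labor_share_of_tasks = sum(v == "labor" for v in tasks.values()) / len(tasks)
print(f"labor's share of tasks: {labor_share_of_tasks:.2f}")  # 9 of 12 -> 0.75
```

The productivity effect operates through a channel this sketch leaves out: the cost savings from the automated tasks raise incomes and spending, which creates demand for labor elsewhere in the economy.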

That JEP symposium includes three other papers on automation and work.

Where is Behavioral Economics Now?

Behavioral economics combines insights from psychology with the economic analysis of behavior. A usual starting point for models of economic actors is that while they will sometimes make wrong decisions, they will not make the same wrong decisions over and over. To put it another way, they will continually try to avoid what they now perceive as the errors of the past, even while committing a range of new mistakes.

But psychological research suggests that in certain settings, many people will make the same mistake over and over. For example, many people have personal or economic habits (say, exercise, or eating healthier, or quitting smoking, or saving more) that they would like to change. They know with some part of their mind that if they don’t make a change, they are likely to regret it in the future. Nonetheless, they keep deciding they will start the desired change tomorrow, rather than today. There are many of these biases that, while one can learn to overcome them, seem built into cognition. People often do a poor job of thinking about risks that have a low probability of happening. Behavioral economics looks at the outcome of economic situations where some or many people have these biases.

For those who want to learn what behavioral economics is all about, a good starting point is The Behavioral Economics Guide 2021, edited by Alain Samson. It’s from the behavioraleconomics.com website, which serves as an online hub for people interested in the topic. I recommend the volume for several purposes.

There’s a 30+ page glossary of behavioral science concepts, for those who would like a little help with the lingo, starting with “action bias” and ending with the “zero price effect.” Here’s “action bias”:

Some core ideas in behavioral economics focus on people’s propensity to do nothing, as evident in default bias and status quo bias. Inaction may be due to a number of factors, including inertia or anticipated regret. However, sometimes people have an impulse to act in order to gain a sense of control over a situation and eliminate a problem. This has been termed the action bias (Patt & Zeckhauser, 2000). For example, a person may opt for a medical treatment rather than a no-treatment alternative, even though clinical trials have not supported the treatment’s effectiveness. Action bias is particularly likely to occur if we do something for others or others expect us to act (see social norm), as illustrated by the tendency for soccer goal keepers to jump to left or right on penalty kicks, even though statistically they would be better off if they just stayed in the middle of the goal (Bar-Eli et al., 2007). Action bias may also be more likely among overconfident individuals or if a person has experienced prior negative outcomes (Zeelenberg et al., 2002), where subsequent inaction would be a failure to do something to improve the situation.

Here’s the definition of “zero price effect”:

The zero price effect suggests that traditional cost-benefits models cannot account for the psychological effect of getting something for free. A linear model assumes that changes in cost are the same at all price levels and benefits stay the same. As a result, a decrease in price will make a good equally more or less attractive at all price points. The zero price model, on the other hand, suggests that there will be an increase in a good’s intrinsic value when the price is reduced to zero (Shampanier et al., 2007). Free goods have extra pulling power, as a reduction in price from $1 to zero is more powerful than a reduction from $2 to $1. This is particularly true for hedonic products—things that give us pleasure or enjoyment (e.g. Hossain & Saini, 2015). A core psychological explanation for the zero price effect has been the affect heuristic, whereby options that have no downside (no cost) trigger a more positive affective response.
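
The contrast between the linear model and the zero price model in that definition can be written out directly. In this sketch the size of the “free bonus” is an arbitrary assumption of mine; the point is only where it applies:

```python
def linear_value(base_value: float, price: float) -> float:
    # Standard model: net benefit falls one-for-one with price at every level.
    return base_value - price

def zero_price_value(base_value: float, price: float,
                     free_bonus: float = 0.5) -> float:
    # Zero price model: an extra affective "bonus" applies only at price zero.
    return base_value - price + (free_bonus if price == 0 else 0.0)

# A $1 -> $0 price cut raises value by 1 + bonus; a $2 -> $1 cut by just 1.
print(zero_price_value(3.0, 0.0) - zero_price_value(3.0, 1.0))  # 1.5
print(zero_price_value(3.0, 1.0) - zero_price_value(3.0, 2.0))  # 1.0
```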

If you have a drop of social scientist blood in your veins, descriptions like this will start your pulse pounding. Do these effects really exist? In what context? How would you measure them? Are these effects intertwined with a different or perhaps broader effect? If you need concrete examples, the bulk of the report is 15 short and readable summaries of recent studies in the area, with examples concerning prevention of gender-based violence, banking and insurance, sustainable agriculture, career coaching, and more.

One theme that emerges from several papers in the volume is that behavioral economics effects are often deeply rooted in a particular context: that is, you can’t just grab an item from the glossary, plug it into your life or business or organization, and assume you know how it will work. For example, Florian Bauer and Manuel Wätjen write about their research in “Tired of Behavioral Economics? How to Prevent the Hype Around Behavioral Economics From Turning Into Disillusionment.” They write:

Applying the behavioral economics effects found in academic experiments to marketing is becoming more and more popular. However, there is increasing evidence that copy-and-pasting academic effects does not achieve the desired effects in real life. This article aims to show that this is not because customers are becoming wise to nudges or that behavioral economics does not work at all, but because the application of behavioral economics typically ignores the contextual aspects of the actual decision to be influenced. Herein, we present a framework that considers these aspects and helps develop more effective behavioral interventions in marketing, pricing, and sales.

John A. List makes an argument that is similar in tone but broader in his introduction to the volume, “The Voltage Effect in Behavioral Economics.” List points out that it is fairly common for someone to latch on to an academic study in behavioral economics, but then be disappointed when it doesn’t seem to “scale up” to a real-world context. List writes:

Indeed, most of us think that scalable ideas have some ‘silver bullet’ feature, i.e., some quality that bestows a ‘can’t miss’ appeal. That kind of thinking is fundamentally wrong. There is no single quality that distinguishes ideas that have the potential to succeed at scale with those that do not do so. In this manner, moving from an initial research study to one that will have an attractive benefit cost profile at scale is much more complex than most imagine. And, in most cases, scaling produces a voltage drop—the original BE [behavioral economics] insights lose considerable voltage when scaled. The problem, ex ante, is determining whether (and why) that voltage drop will occur. … What this lesson inherently means is that scaling, in the end, is a weakest link problem: the endeavor is only as strong as the weakest link in the chain.

In other words, the connection from an academic study to a real-world application involves a number of links in a chain. List lays out five of them:

  1. Inference. Perhaps the academic study you looked at was a “false positive”–that is, the result won’t hold up in other similar studies. List suggests that before believing an effect is real, one should look for “three or four well-powered independent replications of the original finding.”
  2. Representativeness of the population. If a study was done on a group of college sophomores, or retirees, or people with a certain medical condition, you can’t necessarily assume that it will apply well to other groups, like working-age adults or high school dropouts.
  3. Representativeness of the situation. For example, a small-scale program with a small group of dedicated and trained participants may not scale up well to a larger general population group that is less dedicated and less well-trained.
  4. Spillovers and general equilibrium effects of scaling. Say that I start a program in a certain area to teach a highly desirable skill to some workers. Those workers get much higher pay as a result. So then I expand the program very substantially to teach this skill to many more workers. The original group did well in part because it was a small group, and the skill was still scarce. But at some point, as the skill becomes common in that area, the rewards will be lower.
  5. Marginal cost considerations. As a program expands, there are two possibilities. One is that there are economies of scale: that is, as the program covers more people, average cost per person falls. For example, a web-based program that can be expanded to cover many more people might have this property. However, the other possibility is that as the program covers more people, average cost per person rises. This can happen if the program needs some specific and particular skills that may be hard to get: for example, perhaps I can train a small group of teachers who volunteer to be part of my new specific curriculum, but as I try to train bigger and bigger groups, it gets harder.
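
List’s first link, inference, can be given rough numbers. Assuming a conventional 5 percent false-positive rate and independent, well-powered replications (my illustrative arithmetic, not List’s), the chance that a spurious finding survives repeated replication shrinks geometrically:

```python
# If the true effect is zero, each well-powered independent replication
# "succeeds" only by chance, at roughly the false-positive rate alpha.
ALPHA = 0.05  # conventional significance threshold (an assumption)

def prob_spurious_survives(n_replications: int, alpha: float = ALPHA) -> float:
    """Probability a nonexistent effect passes all n independent replications."""
    return alpha ** n_replications

for n in [1, 2, 3, 4]:
    print(n, prob_spurious_survives(n))
# After three replications the probability is 0.05**3, or 1 in 8,000, which
# is why List's "three or four" replications is a demanding standard.
```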

List draws on Tolstoy to summarize his thinking. At the beginning of Anna Karenina, Tolstoy famously wrote: “All happy families are alike; each unhappy family is unhappy in its own way.” List echoes: “[A]ll successfully scaled ideas are alike; all unsuccessfully scaled ideas fail in their own way.” I would just add that from this perspective, one of the advantages of behavioral economics is that it helps to drag social science generalities down into the realness of the particular.

As R&D Goes Global, How Should the US Respond?

There was a time, about a half-century ago, when a majority of the world’s research and development happened within the borders of the United States. Researchers, inventors, and firms didn’t move much. What was invented in the US had a tendency to remain, at least for a while, as domestic US economic activity. Those times are now behind us.

Bruce R. Guile and Caroline S. Wagner raise this issue in “A New S&T Policy for a New Global Reality” (Issues in Science and Technology, Summer 2021). Their essay is part of the beginning of a promised series of essays on the “The Next 75 Years of Science Policy.” They write:

In the past, countries depended on the low mobility of researchers, inventors, and entrepreneurs to link R&D to innovation and innovation to wealth creation. When researchers were less mobile and less engaged in close collaboration with peers in other nations, new knowledge tended to be retained by institutions and the countries that housed them. From a national perspective, this arrangement had the benefit of aligning intellectual property ownership, early applications, and company growth with the location of the R&D activity. … 

At the same time that global collaboration has become ubiquitous, the rest of the world has begun doing more research. During the 1960s, US public and private R&D investment accounted for almost 70% of the global total. Today, even though US spending has increased, US R&D is less than 30% of the world’s total. Twenty nations now match or exceed US R&D intensity, with public and private R&D spending in these countries near or in excess of 2% of gross domestic product per year. In absolute dollars, China spends approximately the same amount on R&D as the United States. Furthermore, according to figures from the Organisation for Economic Co-operation and Development, China has nearly 2 million full-time equivalent researchers now, compared with the United States’ 1.5 million.

The common pattern now is that research and development happens in many places around the world, often coordinated by large companies, but also through the movement of academic and corporate researchers between places and organizations.

Simultaneously, US multinational corporations have established global networks of research laboratories, research university relationships, and talent recruitment efforts that partially decouple them from the science and engineering enterprise in the United States. Virtually every technologically advanced manufactured good is created by a production process (supply chain) that crosses national borders several if not dozens of times and draws on innovations from many countries.

Being first with new scientific knowledge or having a pioneering innovative company based in the United States does not guarantee success in domestic industry. Nor does it guarantee that the nation will capture substantial economic value from the new knowledge. In a globalized knowledge network, knowledge spreads so quickly and widely that being in “first place” is a notional distinction at best. New scientific and engineering knowledge and innovation cross US borders in both directions—as part of commercial exchanges and collaborations—every day. Economic value cannot be captured by erecting barriers to the flow of knowledge or trade as the United States needs new knowledge and innovation from outside its borders as much as it needs robust US-based scientific and engineering capability.

Thus, the US economy is confronted with two realities. One is that a rising standard of living in the future, which would make it so much easier to address our ongoing economic and social problems than the alternative of a stagnant standard of living, depends on the increases in productivity that grow in substantial part from new technology. But the other reality is that relying just on US-created R&D is going to be a poor strategy, because US-created R&D (like all R&D) is flowing much more easily around the world than ever before. What are the implications of this situation?

Guile and Wagner argue that it still very much matters for the US to have cutting-edge domestic R&D capabilities, but in the modern world, this means being an attractive location for researchers from around the world–and adjusting immigration policies accordingly. It also means systematic and expanded “US government support for tracking and monitoring research activity and output, regardless of where it occurs, and support for dissemination of that information to US-based companies and centers of research.” Finally, it means paying more attention to the ingredients needed for the US economy to capitalize on new R&D, which implies a focus on policies for a workforce with the necessary skills, as well as the design of tax, investment, and trade policies.

For a couple of earlier posts on global R&D, with additional evidence and discussion of these themes, see:

“Global R&D: The Stagnant US Position” (February 7, 2020)

“Global R&D: An Overview” (February 11, 2013)

The Inflation Spurt: The CEA on Historical Parallels

Reporting on the Consumer Price Index numbers for June looked like this:

  • Consumer prices increased 5.4% in June from a year earlier, the largest 12-month gain since August 2008.
  • Excluding food and energy, inflation increased 4.5%, the largest move since September 1991.

Is the spurt of inflation of the last few months likely to be sustained, or to fade? Chair of the White House Council of Economic Advisers Cecilia Rouse, along with Jeffery Zhang and Ernie Tedeschi, last week offered a short rumination on the “Historical Parallels to Today’s Inflationary Episode” at the CEA website (July 6, 2021).

Here’s a figure showing two prominent measures of inflation: the Consumer Price Index and the Personal Consumption Expenditures Index. The CPI is better-known to the public, but the PCE index is the one on which the Federal Reserve actually focuses (for my quick discussion of the differences between them, see here). For practical purposes, they move together fairly closely over time. On the far right, you can see the recent jump of inflation up to an annual rate of about 5%.

As the authors note, this figure shows six previous periods in which CPI inflation topped 5%: 1946–48, 1950–51, 1969–71, 1973–82, 1991, and 2008.  For the current episode, which parallel is most relevant? They argue that recent inflation spikes have been linked to rises in global oil prices, which hasn’t been happening, and that the inflation jump of the late 1960s was linked to an economy running white-hot with rapid growth and low unemployment, which isn’t the current problem, either.

The authors argue that the inflation of the 1970s was mainly the result of higher oil prices as well, and while they are only writing a short piece, this seems to me like a significant oversimplification. There is sound evidence that in the lead-up to the 1972 election, the Federal Reserve under Arthur Burns responded to political pressure from President Richard Nixon by keeping interest rates very low in what was already an inflationary environment. There were also national wage and price controls, first imposed and then removed by Nixon, and a later attempt to impose credit controls under Jimmy Carter. Of course, the 1970s spikes in oil prices didn’t help, either. But the inflationary period of the 1970s had a number of contributing factors.

Ultimately, Rouse, Zhang, and Tedeschi argue that the most relevant parallel here is the post-WWII inflation episode. There are of course many differences. But as they point out, both the World War II period and the pandemic were times when savings rose substantially, suggesting the possibility of pent-up demand ready to surge into markets. Both episodes were also a time when economic dislocations meant that certain goods were not readily available; for example, the pandemic has been accompanied by shortages of durable goods like cars, in part because “manufacturing capabilities were temporarily shut down or reduced to avoid COVID contagion.”

It’s worth noting that in this short essay, Rouse, Zhang, and Tedeschi do not discuss the large and ongoing rise in federal debt, nor the role of the Federal Reserve in financing that debt, nor the potential for additional inflationary pressures from the existing spending proposals now before Congress. One can certainly sketch out a scenario along these lines where inflationary dangers look more severe. In particular, if consumers and firms expect a certain level of inflation, and then build that inflation rate into their expectations of future increases in prices and wages, then shorter-term inflation could build a longer-lasting momentum.

However, neither the markets nor the professionals seem to believe these scenarios. To gauge market expectations of inflation, you can compare two similar types of debt, one of which adjusts for inflation and one of which does not. For example, the US Treasury issues most of its debt in a way that does not adjust for inflation, but it also issues Treasury Inflation Protected Securities (TIPS), which adjust according to changes in the Consumer Price Index. Comparing these two provides a market-based estimate of expectations of future inflation. Another measure of inflation expectations draws on predictions from professional forecasters, which are tabulated in the Livingston Survey carried out by the Federal Reserve Bank of Philadelphia. Here are expectations of inflation from these two measures.
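
The market-based measure described here is often called the “breakeven” inflation rate. A minimal sketch, using hypothetical yields rather than actual market data:

```python
def breakeven_inflation(nominal_yield: float, tips_yield: float) -> float:
    """Approximate expected inflation: the spread between a nominal Treasury
    yield and the TIPS (real) yield at the same maturity."""
    return nominal_yield - tips_yield

def breakeven_exact(nominal_yield: float, tips_yield: float) -> float:
    """Fisher-equation version, exact rather than the linear approximation."""
    return (1 + nominal_yield) / (1 + tips_yield) - 1

# Hypothetical 5-year yields: 0.8% nominal, -1.6% real on TIPS.
print(breakeven_inflation(0.008, -0.016))  # ~0.024, i.e. roughly 2.4%
```

The spread is only an approximation of expected inflation, since it also bundles in liquidity and inflation-risk premiums, but it has the virtue of being updated continuously by market prices.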

The two measures as presented here are not exactly comparable, because the market-based inflation expectations are for five years ahead, while the Livingston survey is for 10 years ahead. But either way, it looks as if future inflation expectations are in the range of about 2%, which is often taken as a reasonable working definition of “price stability.”

Those interested in some additional history of US inflation in the last 100 years or so might be interested in my post from a few years back, “100 Years of Consumer Price Inflation Data” (April 30, 2014).

Snapshots of Global Wealth

The Credit Suisse Research Institute has published Global Wealth Report 2021, its annual report looking at levels and distribution of wealth around the world (June 2021). Here are a few snapshots and comments from the report.

This figure shows the wealth pyramid. As the report notes, “Our calculations suggest, for example, that a person needed net assets of just USD 7,552 to be among the wealthiest half of world citizens at end-2020. However, USD 129,624 was required to be a member of the top 10% of global wealth holders, and USD 1,055,337 to belong to the top 1%.”


Here’s a summary of the wealth pyramid from the report:

We estimate that 2.9 billion individuals – 55% of all adults in the world – had wealth below USD 10,000 in 2020. The next segment, covering those with wealth in the range of USD 10,000–100,000, has seen the biggest rise in numbers this century, more than trebling in size from 507 million in 2000 to 1.7 billion in mid-2020. This reflects the growing prosperity of emerging economies, especially China, and the expansion of the middle class in the developing world. The average wealth of this group is USD 33,414, slightly less than half the level of average wealth worldwide. Total assets amounting to USD 57.3 trillion provide this segment with considerable economic leverage. …

The upper-middle segment, with wealth ranging from USD 100,000 to USD 1 million, has also expanded significantly this century, from 208 million to 583 million. They currently own net assets totaling USD 163.9 trillion or 39.1% of global wealth, which is nearly four times their share of the adult population. The middle class in developed nations typically belong to this group. Above them, the top tier of high net worth (HNW) individuals (i.e. USD millionaires) remains relatively small in size, but has expanded rapidly in recent years. It now numbers 56 million, or 1.1% of all adults. … HNW adults are increasingly dominant in terms of total wealth ownership and their share of global wealth. The aggregate wealth of HNW adults has grown nearly four-fold from USD 41.5 trillion in 2000 to USD 191.6 trillion in 2020, and their share of global wealth has risen from 35% to 46% over the same period.

What if we look by region? This next figure is perhaps a little harder to think about. The horizontal axis shows the percentile in the global wealth distribution. The vertical axis shows the location of people in each region in that wealth distribution. The top panel shows the data for 2000; the second panel shows data for 2020, so you can see the shift. For example, back in 2000 the bulk of China’s population was in the 40th-80th percentiles of the global wealth distribution. By 2020, China has a much larger share in the 60th-95th percentiles.

Finally, here’s some data by country, in a way that also shows some interesting patterns and some reasons to be cautious about the data. This table shows the share of people in a given country who are millionaires. Remember, this is a measure of millionaires by wealth, not income. Wealth includes, for example, the savings accumulated in a retirement account and the equity in your home. Thus, this table does not say that 8.8% of Americans had $1 million in income last year. It says that when you add up retirement accounts, real estate equity, and other financial assets, 8.8% of Americans have more than $1 million in wealth. Presumably a disproportionate number of the people in this group are retired, and would have much less than $1 million in annual income.

Two points about the table are worth noting. First, I found it interesting–given how many people I run into who profess to like the income distribution patterns of northern European countries–to see that Sweden and Denmark are much higher on millionaires per capita than, say, Canada, France, Germany, and the United Kingdom. But as a second point, the numbers in this table all measure millionaires in US dollars, and thus require using an exchange rate to convert from the local currency to US dollars. As a result, shifts in the number of millionaires can easily reflect shifts in the US dollar exchange rate, rather than shifts in wealth measured in the local currency.
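
That second caution is easy to see with a toy conversion (the wealth level and exchange rates below are hypothetical):

```python
def is_usd_millionaire(local_wealth: float, usd_per_local_unit: float) -> bool:
    # Wealth is converted at the current exchange rate before applying
    # the USD 1 million threshold.
    return local_wealth * usd_per_local_unit >= 1_000_000

wealth_sek = 9_000_000  # a Swede holding 9 million kronor, unchanged all year

print(is_usd_millionaire(wealth_sek, 0.12))  # True:  9m * 0.12 = 1.08m USD
print(is_usd_millionaire(wealth_sek, 0.10))  # False: 9m * 0.10 = 0.90m USD
```

Nothing about this person’s wealth changed in local-currency terms; only the krona-dollar rate moved.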

Some Snapshots of World Energy Use

Every year since 1952, the BP Statistical Review of World Energy has provided a volume full of tables and charts about energy usage around the world. What are some patterns that emerge from the just-released 2021 report? Here’s an overview comment from the introduction by chief executive officer Bernard Looney:

Global energy demand is estimated to have fallen by 4.5% in 2020. This is the largest recession since the end of World War II, driven by an unprecedented collapse in oil demand, as the imposition of lockdowns around the world decimated transport-related demand. The drop in oil consumption accounted for around three-quarters of the total decline in energy demand. …  The fall in carbon emissions from energy use was equally striking, with emissions falling by over 6% in 2020 – again, the largest decline since 1945. Although unmatched in modern peacetime, the rate of decline in carbon emissions last year is similar to what the world needs to average each year for the next 30 years to be on track to meet the aims of the Paris Agreement.

That last sentence is worth some consideration. It’s easy for international conferences and countries and states and cities to make promises about future reductions in carbon emissions, and in particular to announce with fanfare goals for net-zero carbon emissions that are two or three decades in the future. But achieving those goals would require the immediate embrace of dramatic changes, sustained over several decades–which, in all honesty, just isn’t happening anywhere.

Here’s a figure showing sources of “primary energy” for the world as a whole. This includes all commercially traded sources of energy: that is, it’s a broad measure that includes transportation, commercial and industrial use, and homes.

As the report summarizes this figure: “Oil continues to hold the largest share of the energy mix (31.2%). Coal is the second largest fuel in 2020, accounting for 27.2% of total primary energy consumption, a slight increase from 27.1% in the previous year. The share of both natural gas and renewables rose to record highs of 24.7% and 5.7% respectively. Renewables has now overtaken nuclear which makes up only 4.3% of the energy mix. Hydro’s share of energy increased by 0.4 percentage points last year to 6.9%, the first increase since 2014.”

I’d put it another way: there are three big energy sources, each with about 25-30% of the global market, and they are all fossil fuels: oil, coal, and natural gas. There are three much smaller energy sources, all carbon-free but each with well under 10% of the global market: hydroelectricity, nuclear, and renewables (like solar, wind, and geothermal).

Energy use around the world is highly unequal, and as lower- and middle-income countries develop it seems virtually certain that their demand for energy will rise. Here’s a figure showing energy consumption per capita by region of the world. For the world as a whole (gray line), energy use is 71 gigajoules per person. For North America, it’s about three times as much; for Europe, the Middle East, and the Commonwealth of Independent States (mainly Russia) it’s twice as much. Other regions are still below the global average.

This figure shows the distribution of energy use per capita across countries. It’s a cumulative distribution: thus, the green line for 2018 is lower at the bottom left of the figure and higher at the top right, showing that a smaller share of the world population now lives in countries with very low energy consumption. The big vertical jump in the middle of the green line is the population of China. As the report notes: “In 2020 63.7% of the global population lived in countries where average energy demand per capita was less than 100 GJ/head, a significant decrease from 81% in 2019, as energy demand per capita in China increased to 101 GJ/head from 99 GJ/head in 2019.” As the figure shows, most of the global population consumes 100 gigajoules per person or less, while the top few percent of the global population consume three times that much energy.
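A cumulative distribution of this kind is built by sorting countries by energy use per head and accumulating their population shares. Here is a minimal sketch with made-up country figures (the numbers are illustrative, loosely in the spirit of the figure, not actual BP data):

```python
# Hypothetical (country, population in millions, GJ per head) rows.
countries = [
    ("A", 1400, 25),   # large, low-consumption country
    ("B", 1420, 101),  # large country just above 100 GJ/head (think China's jump)
    ("C", 330, 280),   # smaller, high-consumption country
]

def share_below(threshold_gj):
    """Share of total population living in countries under a GJ/head threshold."""
    total = sum(pop for _, pop, _ in countries)
    below = sum(pop for _, pop, gj in countries if gj < threshold_gj)
    return below / total

# With these made-up numbers, the big vertical "jump" in the curve comes from
# country B crossing the 100 GJ/head line between the two thresholds.
print(round(share_below(100), 2))
print(round(share_below(200), 2))
```

Because each country contributes its whole population at a single consumption level, a very populous country crossing a threshold produces exactly the kind of vertical jump the figure shows for China.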

In terms of global carbon emissions, about one-third of all emissions are from high-income countries, and two-thirds from low- and middle-income countries. Here’s a table that I condensed from a larger table in the report. The US accounts for 13.8% of global carbon emissions, and total US carbon emissions had been falling 0.5% per year over the decade up to 2019, with a bigger fall during the pandemic. Carbon emissions for Europe are similar, although they have been declining about twice as fast. However, the Asia/Pacific region accounts for more than half of all global carbon emissions, and before the pandemic carbon emissions from this region were growing at 2.6% per year over the decade up to 2019. China alone accounts for 30.7% of global carbon emissions.

In short, no plan for reducing global carbon emissions can be considered serious if it doesn’t have a heavy focus on the Asia/Pacific region and on China. Moreover, any such plan can’t just replace existing sources of energy, but also needs to show how total energy consumption can rise dramatically–for this region and in other low-income countries around the world–while still putting carbon emissions on a downward trajectory.

Deep Regional and Bilateral Trade Agreements

There was a time, not so very long ago, when discussions of international trade focused on global talks held through the World Trade Organization. But the so-called “Doha round” of global trade talks that started back in 2001 seemed almost dead even a decade ago, and it hasn’t jolted back to life since then.

James McBride, Andrew Chatzky, and Anshu Siripurapu recently updated their “Backgrounder” article for the Council on Foreign Relations on “What’s Next for the WTO?” (June 14, 2021). As they point out, the main multilateral success for the WTO in the last decade was a Trade Facilitation Agreement back in 2015 to speed up and coordinate customs procedures for imported products. They also point out that the WTO has shifted from worldwide “multilateral” agreements to “plurilateral” agreements among smaller groups of interested countries focused on a specific set of products. For example, 14 countries pursued an Environmental Goods Agreement about liberalizing trade in products like wind turbines and solar panels, and 19 countries have signed on to a Government Procurement Agreement, which tries to assure that governments will consider buying imported products when that would benefit taxpayers, rather than just funneling their purchasing power to domestic producers.

The broader shift is that international trade agreements used to focus on reducing explicit barriers to trade, like tariffs and import quotas. The new wave of regional trade agreements focuses on assuring common regulations and standards throughout an industry, which can then facilitate global supply chains. These kinds of rules go far deeper than just reducing tariffs and import quotas. For example, imagine the set of health and safety rules that applies if ingredients for a pharmaceutical product are going to be made in different countries and traded across borders, or the set of regulations that applies if insurance or banking services are going to be internationally traded.

These “deep trade agreements” seem both necessary for trade in a modern global economy, and also necessarily more controversial. They are increasingly necessary because world trade is less and less about basic commodities like oil or timber, or basic goods like clothing, and more and more about complex goods and services. But of course, the more a country agrees to detailed agreements for trade, or demands such detailed agreements as a condition for trade, the greater the likelihood that the agreed-upon rules will not actually be about advancing trade, but instead will be about domestic firms and various special interests trying to piggyback their own agendas on to the trade agreement. Or to put it more simply: The analysis of changing tariffs or import quotas is fairly straightforward; the analysis of hundreds of pages of detailed regulations governing trade is messy.

Ana Margarida Fernandes, Nadia Rocha, and Michele Ruta have edited an e-book of essays on these topics titled The Economics of Deep Trade Agreements (CEPR Press, June 2021). Many of the essays provide readable overviews of the key findings of recent research, and I’ll provide a full Table of Contents for the book below. But as an example, here are a couple of thoughts from the essay “Lobbying on Deep Trade Agreements: How Large Firms Buy Favourable Provisions,” by Michael Blanga-Gubbay, Paola Conconi, In Song Kim, and Mathieu Parenti.

The authors point to the sharp rise in the number of regional and bilateral trade agreements in the last 25 years.

The authors point to a recent data set that contains detailed information on the provisions of current trade agreements in 17 areas. As they write:

Some of these deep trade issues are related to trade policies: Export Restrictions, Rules of Origin, Trade Facilitation and Customs, and Trade Remedies. Others concern non-trade policies: Intellectual Property Rights, Sanitary and Phytosanitary Measures, Technical Barriers to Trade, Public Procurement, Subsidies, Services Investment, Movement of Capital, Visa and Asylum, State Owned Enterprises, Competition Policy, Environmental Laws, and Labour Market Regulations.

In addition, several of the essays point out, the question of how (or whether) the rules will be enforced is of key significance: Domestic courts in the exporting or importing country? Arbitrators? International tribunals? The authors also refer to an essay by Dani Rodrik in the Spring 2018 issue of the Journal of Economic Perspectives (where I work as Managing Editor).

To illustrate the increase in depth of trade agreements, Rodrik (2018) compares US trade agreements with two small nations, Israel and Singapore, signed two decades apart. The US-Israel agreement, which went into force in 1985, is less than 8,000 words in length and contains 22 articles and three annexes, mostly devoted to trade issues such as tariffs, agricultural restrictions, import licensing, and rules of origin. By contrast, the US-Singapore agreement, which went into force in 2004, is nearly ten times as long, taking up 70,000 words and containing 20 chapters, more than a dozen annexes, and multiple side letters. Of its 20 chapters, seven cover conventional trade topics, while the others deal with behind-the-border topics. For example, provisions on intellectual property rights take up a third of a page (and 81 words) in the US-Israel agreement; they occupy 23 pages (and 8,737 words) plus two side letters in the US-Singapore agreement.

_____________

Here’s the full Table of Contents for the book:

“Introduction: The economics of deep trade agreements,” by Ana Margarida Fernandes, Nadia Rocha, and Michele Ruta

The economic impact of deep trade agreements
1 “The enduring role of international integration in development,” by Pinelopi Goldberg and Tristan Reed
2 “Quantifying the impact of deep trade agreements: A general equilibrium approach,” by Lionel Fontagné, Nadia Rocha, Michele Ruta, and Gianluca Santoni
3 “Using machine learning to assess the impact of deep trade agreements,” by Holger Breinlich, Valentina Corradi, Nadia Rocha, Michele Ruta, João M.C. Santos Silva, and Tom Zylkin

Political economy and design of deep trade agreements
4 “Lobbying on Deep Trade Agreements: How Large Firms Buy Favourable Provisions,” by Michael Blanga-Gubbay, Paola Conconi, In Song Kim, and Mathieu Parenti
5 “Global value chains and deep integration,” by Leonardo Baccini, Matteo Fiorini, Bernard Hoekman, Carlo Altomonte, and Italo Colantone
6 “Pro-competitive provisions in deep trade agreements,” by Meredith A. Crowley, Lu Han, and Thomas Prayer


Protectionism at the border and beyond
7 “The impact of preferential trade agreements on the duration of antidumping protection,” by Thomas J. Prusa and Min Zhu
8 “Trade facilitation provisions in deep trade agreements: Impact on Peru’s exporters,” by Woori Lee, Nadia Rocha, and Michele Ruta
9 “Heterogeneous impacts of sanitary and phyto-sanitary and technical barriers to trade regulations: Firm-level evidence from deep trade agreements,” by Ana Margarida Fernandes, Kevin Lefebvre, and Nadia Rocha


Services trade and state intervention
10 “Scoping services trade agreements: What really matters,” by Ingo Borchert and Mattia Di Ubaldo
11 “Trade barriers in government procurement,” by Alen Mulabdic and Lorenzo Rotunno
12 “The spillover effect of deep trade agreements on Chinese state-owned enterprises,” by Kevin Lefebvre, Nadia Rocha, and Michele Ruta


Non-trade issues in trade agreements
13 “How preferential trade agreements with strong intellectual property provisions affect trade,” by Keith E. Maskus and William Ridley
14 “Deep integration in trade agreements: Labour clauses, tariffs, and trade flows,” by Raymond Robertson
15 “Trade agreements with environmental provisions mitigate deforestation,” by Ryan Abman, Clark Lundberg, and Michele Ruta


Conclusion
16 “Why deep trade agreements may shape post-COVID-19 trade,” by Aaditya Mattoo, Nadia Rocha, and Michele Ruta

Interview with Erica Groshen: Improving Labor Market Statistics

Bill Kerr interviews Erica Groshen, who ran the Bureau of Labor Statistics from 2013 to 2017, about how to improve labor market statistics in particular and government statistics more generally, in a Harvard Business School podcast titled “Infrastructure: Upgrading the US labor statistics system” (June 30, 2021, audio and a transcript available).

A lot of economic data has traditionally been collected by household surveys: that is, asking people what they earn, how many hours they work per week, how they spend their money, what government benefits they receive, and so on. These surveys are carefully designed and conducted, but at the end of the day, they have an irreducible amount of measurement error–because you are relying on people to remember and report accurately. Thus, there has been a shift in recent decades to “administrative” data–that is, data collected by government agencies for other purposes like taxes or Social Security records–which seems likely to be more accurate than household surveys. Along these lines, Groshen points out that the unemployment insurance system could be expanded and converted into a way of gathering much more accurate data on labor markets and jobs. She notes:

[S]tate unemployment insurance agencies that, as part of running the program, collect worker wage records every quarter from every employer that lists the wages of workers for every month during that quarter. They also collect claims records from people who apply for claims. And these data are generally not available to BLS to augment or replace its current data collections. And that’s basically a shame, because it would be quite useful for statistical purposes. And employers, of course, have to report the same or slightly different data to a number of different government agencies. Our economics statistics are also not as good as they could be as a consequence of this. UI [unemployment insurance] wage records include who the person’s employer is and their earnings—that’s what’s in there. They should have job title, because that is closely associated with the person’s occupation. … And this would enable us to track workforce supply and demand much more closely, make better projections about the future of work. You also would want the number of hours worked for the wages that are being reported so that you know if someone is full time or part time, so you can get hourly rates, and really follow that dimension on which wages vary. Another thing you want is the actual work location of the people … And then, the last thing, particularly in these times of understanding demographic inequities—racial inequities, in particular, but also gender inequities, things like that—you want to have demographics so that you can track social justice issues and advances and understand how the world of work is affecting demographic outcomes. These data should also, of course, be curated—by which I mean, they have to clean them up so that you can really analyze them and made accessible to the statistical agencies, for particular with the BLS, so that they can create better statistics. 
You could get better, cheaper, and more-frequent program-policy evaluations so that policy makers could make better decisions.

What are some of the payoffs of this approach? One set of gains is that improved data can improve public policy. Groshen offers a recent example from the pandemic:

I’ll give you an example of one of the things that happens when you don’t have the attention of national statisticians to administrative data. You can think about the unemployment insurance initial claims releases. There’s been a lot of attention to that during the course of the pandemic, because it’s some of the most-timely data and very closely associated with what was going on in the labor market. But those are administrative totals. They are not constructed to be economic indicators. And most of the people paying attention to them were looking for an economic indicator. The solution is clear, which is to have BLS partner with the unemployment insurance system and take over production of the creation of economic indicators from this inputted information a new program that takes advantage of the skills of a national statistical agency to input that data and create an economic indicator that wouldn’t require all of the journalists and all of the economists everywhere else to say, “Well, let me make this adjustment; maybe that’ll tell me what’s really going on.”

Another payoff, further in the future, is that workers with access to their own personal records could have a work history authenticated by the records of their past employers.

They want to come up with a mechanism to provide workers with portable, authoritative job records that they could tap into for applications, for jobs, for educations, for UI benefits, and other public programs as well. Anytime when they say, “What’s your work history?” the worker would be able to plug in their ID and their password or something like that, and they’d have an authenticated work history with skills information and duration and other information on it. 

As another example of a gap in labor market data that I’ve become aware of in my own reading, we know very little about workplace skills and the extent of on-the-job training. As Groshen says:

We know how to count years of education, and we have some ideas on quality. And we know test scores and things like that. But we actually don’t track workplace skills very well—either on the micro or the macro level—for individuals or for the country or groups as a whole. A very large component of skills training is done by employers. And we have no good measures of that. We don’t know who gets what kind of training. We don’t know what our skill gaps are. And so how can we be making the right decisions if we don’t have that information? The last authoritative source of employer-provided training was by the BLS, and it was done in 1995. Congress hasn’t funded it since then. 

Given that government statistics are produced as part of a bureaucracy, one way to produce better statistics is to improve the status of statisticians in the hierarchy. Groshen suggests:

There are many steps we can take to strengthen the independence of statistical agencies. And one of them would be to put them in their own agency outside of the control of a member of the cabinet, have it be headed up by the national statistician of the U.S.—and that’s a job that exists, but right now, the national statistician of the U.S. has an office of about seven people in OMB [the Office of Management and Budget]. So the alternative is put the national statistician actually in charge of the statistical agencies and move them away from reporting to the people who are in charge of policy directly.

There’s more in the interview about the productivity slowdown, the current churning in US labor markets and other topics.

Living with Hyperinflation: Lebanon Stories

The World Bank Lebanon Economic Monitor for Spring 2021 is subtitled “Lebanon Sinking (To the Top Three).” The reason for that somewhat cryptic title is described in the opening lines of the report:

The Lebanon financial and economic crisis is likely to rank in the top 10, possibly top three, most severe crises episodes globally since the mid-nineteenth century … [when] contrasted with the most severe global crises episodes as observed by Reinhart and Rogoff (2014) over the 1857–2013 period. In fact, Lebanon’s GDP plummeted from close to US$ 55 billion in 2018 to an estimated US$ 33 billion in 2020, with US$ GDP/per capita falling by around 40 percent. …

Overall, the World Bank Average Exchange Rate (AER) depreciated by 129 percent in 2020. Exchange rate pass through effects on prices have resulted in surging inflation, averaging 84.3 percent in 2020. Meanwhile, the stock of currency in circulation increased by 197 percent, even as broad money supply (which includes bank deposits) declined, with the latter weighed down by deleveraging in the financial sector.

There’s a lot to be said here about the dramatic rise in poverty and unrest, with implications beyond economics for political conditions in the Middle East region as a whole. Here, I want to focus on what happens when a society is faced with extremely high inflation rates.

Here, I’ll focus on an essay by Paul Wood, “What happens when your currency collapses? The Lebanese are living through a terrible economic experiment” (Spectator World, June 23, 2021):

The Lebanese currency has lost more than 90 percent of its value over the past 18 months and is continuing its steady decline. It would be foolish to keep more than a few days’ spending money on hand, so everyone has a moneychanger. Mine is Mohammed, who pops round on his moped with ever-fatter stacks of notes with ever more zeros on them. The currency grows physically as it shrinks in value. He passes over a wad of cash and says, smiling: ‘Our leaders are stupid and corrupt.’ That’s true, but only part of the story of what has gone wrong.

Mohammed is here because no one uses the banks to change money anymore. That would be crazy when the official exchange rate is still 1,500 lira to the dollar, one-tenth of the black-market rate of just over 15,000 to the dollar (at the time of writing). Mohammed used to be a chef, a job where he made things that people wanted, adding a small but tangible amount to the nation’s wealth. Now he’s one of thousands of people employed in the completely useless but absolutely indispensable business of ferrying stacks of printed paper back and forth by moped, to make up for the catastrophic failure of the banking system. It’s one small example of the inefficiencies that creep into an economy when you can’t trust the money anymore.
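The two exchange rates Wood mentions make the 90 percent figure easy to check: a lira that once traded near the official 1,500-to-the-dollar rate now fetches over 15,000 on the black market, which is a 90 percent loss in dollar value. A quick sketch of that arithmetic:

```python
official_rate = 1_500   # lira per dollar, the official rate
street_rate = 15_000    # lira per dollar, the black-market rate (at the time of writing)

# Dollar value of one million lira at each rate.
usd_official = 1_000_000 / official_rate  # about $667
usd_street = 1_000_000 / street_rate      # about $67

# Share of the official dollar value lost at the street rate.
loss = 1 - usd_street / usd_official
print(f"{loss:.0%}")  # 90%
```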

Wood has the kinds of stories you expect to hear in this setting: long lines forming for basic goods like food and groceries, taking a literal backpack full of money to the hospital to buy prescription drugs, and so on.

The point I would emphasize here is that when there is inflation at more than negligible levels, people and firms do need to worry about it. You need to worry if your paycheck is keeping up. Firms need to worry about price increases from their suppliers and raising prices to their customers. Borrowing money becomes fraught: Will your interest rate adjust and soar with additional inflation? The energy going into keeping pace with inflation is energy diverted from actual productive pursuits: education and learning skills, investing, cutting costs, research, and more.

For some previous examples from the last decade of nations coping with hyperinflation, interested readers might begin with an earlier discussion of “Hyperinflation and the Venezuela Example” (April 28, 2016) or a discussion of “Hyperinflation and the Zimbabwe Example” (March 5, 2012). Or for an historical list of such episodes, see “58 Episodes of Hyperinflation (Venezuela is #23)” (February 2, 2019).

From a little further back, one of the most succinct explanations of inflation and short-termism that I know appeared in an essay written back in 1992 by V.S. Naipaul, called “Argentina and the Ghost of Eva Peron,” in which Naipaul purportedly quoted “Jorge” on the situation of hyperinflation in Argentina. Here, I’m quoting from the essay as reprinted in the 2003 collection of Naipaul’s travel writing, The Writer and the World.

Another aspect of inflation is that you cease to worry about productivity and even technology. Now, that is the secret of all progress: productivity. But you really can get no more than 3 or 4 percent per annum improvement in productivity anywhere in the world. With inflation like ours you can get 10 per cent in one day, if you know when and where to invest. … It is much more important to protect your working capital than to think about long-term things like technology and productivity–although you try to do both. So capital investment in Argentina is not even covering wear and tear. In short, when the current plant reaches the end of its working life there won’t be a provision built up to purchase new capital equipment. This is the inevitable result of inflation, which is the monetary disease. Your money is disintegrating. It’s like cancer. You live day to day. That’s all you can do when you have inflation of more than 1 per cent per day. You cease to plan. You’re just happy to make it to the weekend.
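The “more than 1 per cent per day” figure compounds brutally. The daily rate is the one Naipaul’s interlocutor cites; the rest of this sketch just follows mechanically from compounding:

```python
# Compounding 1 percent per day over a 365-day year.
daily_rate = 0.01
annual_factor = (1 + daily_rate) ** 365
print(round(annual_factor, 1))  # the price level multiplies roughly 37.8x in a year

# The equivalent annual inflation rate, in percent.
print(round((annual_factor - 1) * 100))  # roughly 3,678 percent

# Real purchasing power left after holding cash for one 30-day month.
print(round(1 / (1 + daily_rate) ** 30, 3))  # about 0.742 of its value remains
```

Losing a quarter of your cash’s purchasing power every month is exactly why, as the quote says, people stop planning and just try to make it to the weekend.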

Are Global Supply Chains Going to Bounce Back?

As the pandemic eventually wanes, a major question is what economic patterns will bounce back. Pol Antràs argues that global supply chains are likely to bounce back. His argument comes in two parts: they have bounced back before, after the Great Recession of 2007-2009; and the large multinational firms that have made long-term investments in these global supply chains will not quickly change their plans. Here are his comments from an interview with Luis Garicano, “Is globalisation slowing down?” The interview appears in a recent e-book by Luis Garicano, Capitalism After COVID: Conversations with 21 Economists (June 2021, CEPR Press). Here’s Antràs:

One reassuring thing is that, contrary to expectations, trade flows and international exchanges have recovered.

A big chunk of world trade is associated with a few hundred companies that are large and that have complex global value chains. When the crisis came and things came to a halt, they didn’t reorganise things massively. For example, they didn’t start changing suppliers very dramatically, so that when things went back to normal, they could scale up again easily. We saw that from previous studies that looked at the Great Recession or previous studies that looked at the Asian financial crisis. These are large shocks. But even when those very large shocks happen, very often we don’t see international links being broken up. It’s very complex to set up value chains, so people tend to stick together and try to ride it out. The current crisis may have led companies to reassess what they should be doing – maybe they should move out of China and set up plants in their own country – but when they crunch the numbers, they may realise that it’s very expensive to build new plants, new equipment. There are such economies of scale that it’s very costly to move things around.

I must confess, you might remember back in 2008 and 2009 when the crisis was starting … That was probably the first time I was talking to the press. I was all sure of myself, and was warning that the recession would lead to depressed trade flows for the next five to ten years. Because it’s so hard to create those links, if the crisis breaks them up you can’t expect things to reappear quickly. But six months later, I was looking like a fool because things picked up very quickly.

I had missed that firms are not dumb. Firms realise that resetting value chains is very costly, so they hold together. If the supplier was in trouble, they would extend the line of credit just to keep things alive so that when things go back to normal, they can scale up. When, a couple of years later, people started looking at the microdata, what we call the extensive margin, the trade links did not move much. It was all an intensive margin – shut down and then back up. For the current covid crisis it’s a bit early to tell, but Asier Minondo (2020), a Spanish economist, has looked at Spanish data and found that things have recovered very quickly because the adjustment has been 95% of the intensive margin.

So I do think that it takes very large and persistent shocks to lead to a reorganisation of production, not even a Great Recession is enough. Policy shocks that are likely to persist are going to lead to this. But Covid to me, and especially at this point, looks like something that in a year we’re out of it.



_________________