In Hume’s spirit, I will attempt to serve as an ambassador from my world of economics, and help in “finding topics of conversation fit for the entertainment of rational creatures.”
The total annual budget for the Journal of Economic Perspectives, where I work as Managing Editor, is projected to be $736,000 in 2022. If you go back 15 years to 2007, you find that total expenses for the journal were $863,000. Why has the nominal budget dropped by 15% over 15 years? The answer is online publication.
(If the overall costs of running this academic journal seem almost ludicrously low to those of you who deal with much larger budgets in academic or business settings, remember that we don’t pay authors anything at all.)
Back in 2007, the JEP still printed about 20,000 copies of each issue–which had dropped off from about 25,000 a few years earlier. The costs of paper, printing, and postage were almost half the total cost of the journal. Now we print fewer than 4,000 copies of each issue, which are mostly mailed to libraries. The cost savings from printing fewer copies are dramatic. Of course, other costs do rise over time. But over the last 15 years, the rise in other costs like salaries and web maintenance has not offset the drop in printing and mailing costs.
However, since 2011 the Journal of Economic Perspectives has been freely available online. The publisher, the American Economic Association, now publishes nine journals, which are bundled together as one subscription for members and libraries. The AEA recognized that few economists were becoming members of the AEA and few libraries were subscribing to AEA publications just to get the Journal of Economic Perspectives. Thus, making the journal freely available did not involve a revenue loss. All the archives back to the first issue in 1987 are freely available, too.
About 2.5 million individual JEP articles are downloaded annually from the American Economic Association website. It’s also possible to download an entire issue: about 70,000 entire issues were downloaded last year. These totals don’t count downloads of JEP articles from other sources: for example, JEP articles were downloaded about 700,000 times from JSTOR last year, and many JEP articles are also posted on personal websites or as part of reading lists.
It’s not obvious how to calculate a gain in productivity for the JEP over the last 15 years. The quantity of labor going into the journal is roughly the same. In physical form, the journal looks much the same: that is, roughly the same number of pages and articles each year. But with free online distribution, it seems clear that the JEP is vastly more available to the universe of possible readers around the world than it was 15 years ago, at a total cost that is substantially lower. This kind of productivity gain will not be reflected in official productivity statistics, but similar changes are happening in many ways, not just academic journals.
(The total publication expenses for the Journal of Economic Perspectives, along with an overview of the budget for the American Economic Association as a whole, are published each year in the “Report of the Treasurer” (for example, May 2022, pp. 650-654).)
There is a long-standing if low-simmering controversy about whether tax breaks for charitable giving should even exist. The standard argument for a tax break is that when people give money to an officially recognized nonprofit, rather than consuming it themselves, they are in part contributing to the broader good–and so some tax break is appropriate. The counterargument is that when you donate, you are making a choice about your own income and preferences. If a Harvard or Princeton or Stanford alum decides to make big-money donations to their already-wealthy university, is that really worth a tax break? If a wealthy family donates to an arts organization that just happens to employ a couple of their family members as “associate director of marketing” or a similar title, is that worth a tax break?
Historically, the tax break for charitable contributions has been in the form of a tax deduction: for those not up-to-speed on the US tax system, this means that your donation to charity can be deducted from the income on which you owe taxes. However, the US tax system has long been constructed so that it doesn’t pay for most people to “itemize” their deductions, and these tax deductions end up applying mostly to the well-off. In addition, since those with higher incomes face higher marginal tax rates, their tax benefit from charitable giving is higher too. With these issues in mind, the history of the tax deduction for charitable giving is mostly about very high-income families.
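To make this arithmetic concrete, here is a minimal sketch (the donation amounts and tax rates below are hypothetical illustrations, not figures from the text) of why a deduction is worth more to a high-bracket taxpayer:

```python
# Minimal sketch: the tax value of a charitable deduction equals the
# donation times the donor's marginal tax rate, so the same gift yields
# a larger subsidy at higher brackets. (Rates below are hypothetical.)
def deduction_value(donation, marginal_rate):
    """After-tax subsidy the donor receives for a deductible donation."""
    return donation * marginal_rate

# The same $1,000 gift deducted at a 37% vs. a 12% marginal rate:
print(deduction_value(1000, 0.37))  # larger subsidy in the high bracket
print(deduction_value(1000, 0.12))
```

The asymmetry is mechanical: the deduction reduces taxable income, so its dollar value scales with whatever rate would have applied to that income.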
Steuerle cites estimates that “changes to the tax law in 2017 reduced the federal government’s total individual income tax subsidies for charitable giving by about 30 percent—from an average subsidy of about 21 cents to 15 cents per dollar contributed.” Moreover, “the current charitable tax incentive now benefits only about one-tenth of all households, mainly those with higher incomes. I doubt seriously that the public will long support a deduction so narrowly applied.”
What might be done? One option, of course, would be to see this as an opportunity to simplify the tax code by getting rid of this provision–which, remember, is benefiting only the 10% of taxpayers who tend to have the very highest incomes. Steuerle goes the other way, and thinks instead about how to rebuild the tax incentives.
As he points out, Congress went with a poorly designed idea for encouraging charitable giving in 2020: a provision under which every tax unit (itemizing deductions or not) could deduct $300 from their income for charitable giving. The IRS had no realistic way to check whether these small-scale contributions had actually happened. Moreover, the cap of $300 didn’t offer any incentive for people to give more than that. Instead, non-itemizers who were already giving small amounts (and who knew about this provision) could just get some money off on their taxes. By Steuerle’s estimates, this provision cost the US government $1.5 billion, while increasing charitable giving by $100 million.
To design a policy with better tradeoffs between government revenue and incentives for giving, the key is to limit the tax break for smaller amounts of charitable giving–much of which would have happened whether the tax break existed or not–but still provide an incentive for expanding one’s giving. Thus, Steuerle discusses a charitable giving tax deduction that would be available to all, but would apply only to those who give, say, more than 1-2% of their income to charity. Also, remember that, say, 2% of income is a lot more in absolute dollars for a high-income taxpayer than for the average person, so this requirement helps to limit the share of the tax break flowing to those with very high income levels.
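Here is a small sketch of the floor mechanism Steuerle describes (the 1.9% floor rate appears in the text; the incomes and gift sizes below are hypothetical):

```python
# Floor-based deduction: only giving above a fixed share of income is
# deductible. Small gifts that would likely happen anyway get no subsidy,
# but each extra dollar given above the floor is still fully deductible,
# preserving the marginal incentive to give more.
def deductible_amount(income, giving, floor_rate=0.019):
    """Portion of charitable giving that exceeds the income-based floor."""
    floor = income * floor_rate
    return max(0.0, giving - floor)

# Household with $60,000 income: the floor is $1,140, so a $500 gift
# yields no deduction at all, while a $2,000 gift leaves $860 deductible.
print(deductible_amount(60_000, 500))
print(deductible_amount(60_000, 2_000))
```

The design choice is the same one behind insurance deductibles: forgo subsidizing the first dollars (which are largely inframarginal) in order to afford a subsidy where behavior actually responds.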
For example, his research finds that converting the current charitable tax deduction that applies only to the high-income 10% of taxpayers to a universal deduction for all that applies only to charitable deductions above 1.9% of income would (under the law in place after 2017 but before the many revisions to tax law during COVID-19) leave overall government tax revenue unchanged–but would increase charitable giving by about $2.5 billion: “With so many taxpayers ineligible today for a charitable deduction, Congress would still be significantly expanding the number of people who get a deduction.”
Steuerle offers a few other ideas, based on behavioral economics, about changes that might increase charitable contributions. For example, the current rule is that when you are filling out your taxes in early 2022, you can only count charitable contributions made in 2021. But what if you could make a decision while filling out your 2022 taxes to give some money to charity and see the immediate effect on the taxes you owe? Of course, any given contribution could be deducted in only one year.
Another idea involves lottery winners–or others who receive a major one-time income windfall. Perhaps those people should be able to donate a share of their winnings immediately, without any limit on the deduction?
One could also eliminate the tax break, but instead have the government provide a “match” for charitable contributions: registered charitable organizations would then report to the government what they had collected in donations, and the government would cut them a check for an additional percentage amount. The idea here is that if the government wants to subsidize charitable giving, maybe it doesn’t need to happen through the tax code.
“Technological progress is neither good nor bad, nor is it neutral.” This is known as Kranzberg’s law. Melvin Kranzberg said it, and people keep citing it, although nobody quite knows exactly what he meant.
Economic progress and the small minority
Economic growth and economic progress is not driven by the masses. It is not driven by the population at large. It is driven by a small minority of people who economists refer to in their funny language as upper-tail people, meaning if you think of the world following some kind of bell-shaped or normal distribution, it’s the elite, it’s the people who are educated—not necessarily intellectuals. They could be engineers, they could be mechanics, they could be applied mathematicians. …
[I]f you look at the top 2 percent or 3 percent of the population anywhere, those are the people that are driving economic growth. And that’s still the case. I mean, in the United States, much of the technological progress they’ve been experiencing has been driven by a fairly small number of people. Some of them are Caltech geeks, and some of them are just really good people who are coming up with novel ideas, but basically that’s what it is about. …
And it’s not just about the steam engine or the mule or anything like that, it’s about ideas that try to manipulate nature in a way that benefits humans. And so, I’ll give you one example—it’s not machinery, but it is very critical. It’s vaccination against smallpox, which is very much on people’s minds these days, right? But this is an 18th-century idea. This English country doctor, Edward Jenner, basically came up with this idea. It’s not a machine in any way, but it is a pathbreaking, I would say a radical idea, of how to use what we know about nature to improve human life. And that’s what economic growth in the end is all about. Now, it’s not all human life, it’s material things. It’s how not to get sick, how to get more to eat, how to have better clothing, better housing, to heat your place, to be warmer in the winter and cooler in the summer. It’s about all these things that define our material comfort and our material wellbeing.
Underestimated Productivity Gains in an Information Age
[T]he real problem is that most of the important contributions to economic welfare are often seriously, seriously, seriously underestimated in our procedures. And I believe that they are getting more and more underestimated. If the degree of underestimation is more or less constant, then you don’t care because over time if it isn’t changing over time, you can still see what the trend looks like. But I think that’s not right. I think we are more and more underestimated because the knowledge economy and the digital economy are famously subject to underestimation. …
I mean, just look at the enormous gain in human welfare that we have achieved because we were able to come up with vaccines against corona. Now, it’s not a net addition to GDP because before that we didn’t have corona, but think about the subtraction we would’ve had if it wasn’t for that. And so, I remain a technological optimist, but I’m also very much aware that measures that measure technological progress in a system that was designed for an economy that produced wheat and steel aren’t appropriate for an economy that produces high-tech things that are produced by a knowledge economy. …
Library Technology
[T]hings have changed dramatically in how I do my work. … I used to go to the library four days a week and I would spend 20 to 25 hours a week at least in the shelves, pulling out books, pulling out articles, pulling out journals. I barely go to the library anymore. I mean, why should I? Everything in bloody hell is on my screen here.
Every article I want to look at—even things that aren’t even published—I Google them. Even books published 200 years ago appear on my screen as PDFs. Many of them searchable. I’d be crazy to go to the library. And so the way I do my research, the way I write my own stuff has changed dramatically. My best research assistant is sitting right here on my desk, and it’s my laptop. I mean, that is an amazing thing. Somebody would’ve told me this when I was a graduate student, I would’ve laughed him out of the room. Of course you need to go to the library. When I was a graduate student, I lived in the library. I was there 12 hours a day. I mean, these things have changed dramatically. The way you go to the dentist, the way you go to the doctors, you go to the hospital, you do your shopping—everything has changed.
This last point rings especially true for me, in my work as an editor. I used to have to spend at least half-a-day each week in the library, tracking down articles so that I would have a better understanding of the papers I was editing. Now, pretty much all of those articles–and older books, as well–are easily available through my access to an online college library.
But what Mokyr doesn’t emphasize here is that this change in work patterns is not necessarily an overall savings of time. Because it has become so much easier to look things up, I’m much more likely to do so. My guess is that more than 100% of the time savings from not going to the library and pulling volumes off the shelves now goes into time spent checking and doing background reading on a wider array of articles and data. My productivity is higher, but so is the workload that I impose on myself.
For a sense of how Angrist thinks about these causality issues, here are a couple of useful starting points in the Journal of Economic Perspectives (where I work as Managing Editor):
These really are conversations between Andrews, Angrist, and Imbens, not lectures. No PowerPoint presentations or equations are used! No proofs are involved. It’s just a chance to hear these folks talk about their field and, more broadly, how they see economics.
Why hasn’t the United States adopted the metric system for widespread use? I’ve generally thought there were two reasons. One is that with the enormous US internal market, there was less incentive to follow international measurement standards. The other was that the US has long had a brash and rebellious streak, a “you’re not the boss of me” vibe, which means that there will inevitably be pushback against some external measurement system invented by a French guy and run by an international committee based in a Paris suburb.
However, Stephen Mihm makes a persuasive case that my internal monologue about the metric system is wrong, or at least seriously incomplete, in “Inching toward Modernity: Industrial Standards and the Fate of the Metric System in the United States” (Business History Review, Spring 2022, pp. 47-76, needs a library subscription to access). Mihm focuses on the early battles over US adoption of the metric system, waged in the 19th and early 20th century. He makes the case that the metric system was in fact blocked by university-trained engineers and management, with the support of big manufacturing firms.
The metric system became part of US policy discussions in the early 1800s, after its adoption in France. Mihm writes:
By the 1810s, most commentators considered the metric system a failed experiment. One writer in 1813 noted that the French government, despite wielding considerably more power over their own populace than the United States could, nonetheless failed to secure adoption of the metric units. “The new measures . . . are on the counter . . . but the transactions are regulated by the old.” In 1819, a House of Representatives committee studying the issue concurred in this assessment, pointing to France’s failure to secure widespread adoption of the metric system.
Throughout the 19th century, there was an ongoing discussion about appropriate systems of weights and measures, and the metric system was part of those discussions. But the battle-lines for this dispute began to be clarified in the 1860s. The growth of industrialization across the United States meant that there was also a movement among US industries and engineers to standardize measurements in areas like screw threads, nuts and bolts, sheet metal, wire, and pipe–so that a manufacturing firm could use inputs from a variety of different suppliers around the country, confident that the parts would fit together. A similar movement arose in the railroad industry, standardizing axles, couplings, valves, and other elements so that rolling stock would fit together. This movement was led by a mixture of mechanical engineers and management experts, and it was based on the inch as the standard unit of measure. At least at this time, it’s fair to say that most people cared much less about measures of weight or volume.
But a number of scientists and social reformers preferred the logical organization of the metric system. Mihm reports that in 1863, “the newly created National Academy of Sciences recommended that the United States adopt the metric system. That same year, the United States participated in international congresses on postage and statistics that endorsed the metric system for both scientific and commercial purposes.” Federal legislation passed in 1866 legalized the use of the metric system.
The struggle over how the US measuring system would be standardized then evolved from the late 1800s through the early 20th century. Mihm lays out the details. For example, in 1873 the prominent educator and president of Columbia University, Frederick Barnard, founded the American Metrological Society to push for the metric system. In 1874, the American Railway Master Mechanics’ Association instead pushed for the already-developed system of standardization based on inches. As one of its advocates put it: “While French savants were laboring to build up this decimal system of interchangeable measures … the better class of American mechanics were solving the problem of making machinery with interchangeable parts.”
The dispute got a little weird at times. Mihm tells of the International Institute for Preserving and Protecting Weights and Measures, founded in 1879, which promoted “Great Pyramid metrology,” defined as “a belief that the Egyptians had inscribed the inch as a sacred unit of measurement in the design of their famed structures. … Over the 1870s and 1880s, pyramid metrology channeled much of the opposition to the metric system in the United States.” Lest this seem a little whacky to us, remember that this is a time when scientists and engineers were also exploring mesmerism and divining rods. To put it another way, being a logically rigorous scientist or engineer in one area does not rule out more imaginative approaches to other topics, then or now.
The central practical issue became what economists call “path-dependency.” Imagine two different paths for standardization. Perhaps in the abstract one is preferable. But if you have already committed to the other path, and all your machine tools and existing equipment are based on that alternative path, and all your workers and suppliers and customers are using that other path–then the costs of transition to the other approach are formidable. Indeed, the longer you wait to make the switch, the more committed you are to the path you are on. For example, if you have laid down pipelines for water and oil measured in an inch-based system, as well as set up train tracks and rolling stock based on that system, then you are going to have physical equipment for an inch-based system around for decades.
The metric issue kept bubbling along. “In 1896, the House of Representatives considered a bill that mandated the immediate, exclusive use of the metric system in the federal government, with the rest of the country to follow suit a few years later.” It almost passed, with relatively little attention, but there was concern that risking disruption to industrial production in an election year wasn’t a political winner. When the US Bureau of Standards was created in 1901, its administrators preferred the metric system, but engineers and big companies pushed back hard.
Remember that by the early 20th century, this argument had been going on for decades. US industry had already felt firmly committed to an inch-based system of measurement back in the 1860s and 1870s; by the early 1900s, the idea of redoing all of its capital stock in the metric system seemed crazy. Indeed, US manufacturing was so dominant in the world at this time that US companies routinely exported inch-based equipment to companies in countries that were nominally on the metric system already. Some US manufacturers even argued that the unique inch-based measurement system helped to protect them from foreign competitors.
Bills for the metric system kept coming up in Congress in the early 1900s, and being shot down. Mihm writes:
In 1916, these efforts culminated in the creation of a new anti-metric organization known as the American Institute of Weights and Measures. … Much of its success can be attributed to a sophisticated public relations campaign. It placed advertisements and editorials in industry journals; successfully lobbied hundreds of trade associations, chambers of commerce, and technical societies to go on the record condemning mandatory use of the metric system; and obsessively monitored legislation on the local, state, and national levels. When the group identified a bill that endorsed mandatory metric conversion—or merely contained clauses that opened the door to greater reliance on the metric system—it mobilized hundreds of industrialists, engineers, and managers to defeat the legislation with letters, testimony, and editorials. By the 1920s, its membership rolls included many of the most important firms in the nation as well as presidents of the National Association of Manufacturers, the Association of American Steel Manufacturers, the American Railroad Association, and other national organizations. These organizations had a stake in standardization, actively joining government-sponsored efforts to bring further uniformity to the nation’s economy over the course of the 1920s. As inch-based standards governing everything from automobile tires to pads of paper became the norm, the prospects for going metric became ever more remote. Only in scattered pockets of the business community—the electrical field, for example, and pharmaceuticals—did the metric system become dominant.
We have now reached an odd point in the US experience where two measurement systems co-exist: the inch-based traditional system, along with pints and gallons, ounces and pounds, is how most Americans talk, most of the time, in ordinary life, but the metric system is how all science and most business operates (with the exception of the building trades). Many Americans step back and forth between the two systems of measurement every day in their personal and work lives, barely noticing.
Some readers will be interested to know that this issue of Business History Review has other papers about standardization. Here’s the list of papers. The introductory essay by Yates and Murphy is open access:
Back in 1972, Robert E. Lucas (Nobel ’95) published a paper called “Expectations and the neutrality of money,” in the Journal of Economic Theory (4:2, 103–124). The paper was already standard on macroeconomics reading lists when I started graduate school in 1982, and I suspect it’s still there. For the 50th anniversary, the Journal of Economic Methodology (22:1, 2022) has published a six-paper symposium on Lucas’s work and the 1972 paper in particular.
Reading the heavily mathematical 1972 paper isn’t easy, and summarizing it isn’t easy, either. But at some substantial risk of oversimplifying, it addresses a big question: why does policy by a central bank like the Federal Reserve affect the real economy? There is a widely-held belief (backed by a solid if not indisputable array of evidence) that in the long-run, money is a “veil” over real economic activity: that is, money facilitates economic transactions, but over time it is preferences and technologies, working through forces of supply and demand, that determine real economic outcomes. To put it another way, changes in money will alter the overall price level over long-term time horizons, but money is “neutral” with respect to real economic outcomes.
However, when the Federal Reserve or other central banks conduct monetary policy, it clearly does have an effect on the real economy. When a central bank lowers interest rates and makes credit available in a recession, the length of the recession seems diminished. Today, the concern is that if the central bank raises interest rates, it may cause an economic slowdown or recession. Apparently money is not just a veil over real activity, at least not in the short-run. But why not?
One possible answer here is that people are bad at anticipating the future. Thus, when the Fed stimulates the economy in the short-run, people don’t recognize that this stimulus might lead to inflation. When the Fed was spurring the economy during the pandemic recession in 2020 and into 2021, relatively few people were anticipating higher inflation. But for economists, the theory that monetary policy depends on people and markets being perpetually bad at understanding what’s going on feels like, at best, a partial answer.
Thus, in the 1972 paper, Lucas tried a different approach. He wanted to construct an example–that is, a model–of an economy where all the agents are fully rational. For economists, “rational” doesn’t mean you are always correct. It just means that you take advantage of all available information in making decisions–and as a consequence, you won’t make the same mistake over and over again. Thus, the central bank can’t “fool” these rational agents by juicing up the economy with low interest rates.
However, that phrase “all available information” opens up a possibility. What if economic agents do not and cannot have full information about what’s happening in the economy? In particular, say that when a number of prices rise, it’s hard for an economic agent to know if this is a “real” change because of forces of supply and demand or if it’s a “monetary” change of generally higher price levels. In summer and fall 2021, for example, it was hard to tell whether the higher prices were a “real” result of supply chains fracturing, or a “monetary” result of an overstimulated economy and a generalized inflation.
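A stripped-down numerical sketch of this signal-extraction problem may help (the shocks and variances below are hypothetical illustrations, not Lucas’s actual model): an agent observes a single price change that is the sum of a “real” shock and a “monetary” shock, and the best rational guess attributes the observed change in proportion to the variance of each source.

```python
# Signal-extraction sketch: observed price change = real shock + monetary
# shock, both mean-zero, normal, and independent. The rational
# (conditional-expectation) estimate of the real component scales the
# observation by the share of total variance coming from real shocks.
def estimate_real_component(price_change, var_real, var_monetary):
    signal_share = var_real / (var_real + var_monetary)
    return signal_share * price_change

# If real and monetary shocks are equally variable, a rational agent
# attributes exactly half of any observed price change to real forces:
print(estimate_real_component(2.0, var_real=1.0, var_monetary=1.0))
# If monetary noise dominates, most of the change is read as inflation:
print(estimate_real_component(2.0, var_real=1.0, var_monetary=9.0))
```

The punchline matches the paper’s logic: when monetary variance is high, rational agents discount price movements as mostly inflation, so monetary policy moves output less; when monetary variance is low, the same price movement is read as a real signal and output responds more.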
At the end of the 1972 paper, Lucas writes:
These rational agents are thus placed in a setting in which the information conveyed to traders by market prices is inadequate to permit them to distinguish real from monetary disturbances. In this setting, monetary fluctuations lead to real output movements in the same direction. In order for this resolution to carry any conviction, it has been necessary to adopt a framework simple enough to permit a precise specification of the information available to each trader at each point in time, and to facilitate verification of the rationality of each trader’s behavior. To obtain this simplicity, most of the interesting features of the observed business cycle have been abstracted from, with one notable exception: the Phillips curve emerges not as an unexplained empirical fact, but as a central feature of the solution to a general equilibrium system.
This Lucas paper is heavy on mathematics and will be a tough read for the uninitiated. It has come to be called an “island economy,” although the word “island” doesn’t actually appear in the 1972 paper, because in a mathematical sense the economic actors have a hard time distinguishing between what is on their own “island” and the broader economy. Information is spread out. It’s hard to perceive accurately what’s happening.
The emphasis on what information people have, and how they form their expectations about inflation–when filtered through decades of research since 1972–has had a substantial effect on how real-world economic policy is conducted. It meant that monetary policy placed a greater emphasis on expectations of inflation and how those expectations are formed. At present, for example, a key question for the Federal Reserve is the extent to which expectations of a higher inflation rate are becoming embedded throughout the economy in price-setting, wage-setting, and interest rates–because entrenched inflationary expectations would pose a different policy problem. The emphasis in monetary policy on rules that will be followed over time (like a target inflation rate of 2%) or on “forward guidance” about how monetary policy will evolve in the future are both focused on addressing people’s expectations about future inflation. Keeping central banks reasonably independent from the political process can also be viewed as a way of reassuring people that even if politicians with a short-run focus control federal spending and borrowing, the central bank will be allowed to follow its own course–which again influences the expectations that people have about future inflation.
I should add that the Lucas (1972) paper became just one contribution in a vast literature exploring the reasons why it might be hard for people to distinguish between changes in prices that arise from supply and demand and changes that are part of an overall inflation. For example, some prices might be preset for a certain time by past contracts, and when you know that some prices cannot adjust for a time while others can, figuring out the real and monetary distinctions again becomes tricky.
In the Journal of Economic Methodology symposium, my guess is that a number of economist-readers may be most interested in the personal essays by Thomas J. Sargent (Nobel ’11) on “Learning from Lucas” and Harald Uhlig on “The lasting influence of Robert E. Lucas on Chicago economics,” both of which are full of descriptions of how Lucas influenced the intellectual journey of the authors.
I found particular interest in the first essay in the symposium, “Lucas’s way to his monetary theory of large-scale fluctuations,” by Peter Galbács. The focus here is not on the legacy of Lucas’s work but on the earlier research leading up to it. Some of this ground is also covered in Lucas’s Nobel lecture on “Monetary Neutrality.” Galbács writes in the conclusion:
The way Lucas arrived at his monetary island-model framework was thus a step-by-step process starting in the earliest stage of his career. The first step was the choice-theoretic analysis of firm behaviour. At this stage, Lucas’s focus was on the firm’s investment decision through which he distinguished short-run and long-run reactions of the firm and the industry. The climax of this period is his Adjustment costs and the theory of supply (Lucas, 1966/1967a) that contained the basic supply and-demand framework that Lucas and Rapping (1968/1969a; 1968/1969b; 1970/1972a) shortly extended to labour market modelling – so Lucas’s work with Rapping is rooted in his earlier record in firm microeconomics. As they assumed, the household decides on short-run labour supply on the basis of a given set of price and wage expectations, while it adjusts to long-run changes with a firm-like investment decision that implies the revision of expectations.
After this second step taken in labour market modelling, the third stage realizing his Expectations and the neutrality of money (Lucas, 1970/1972a) directly followed – although the complexity of influences renders the connection with the previous phase subtle (Lucas, 2001, p. 21). His monetary island model was a ‘spin-off’ from his work with Rapping (Lucas’s letter to Edmund S. Phelps, November 7, 1969. Box 1A, Folder ‘1969’), but the paper may be more appropriately regarded as a spin-off from the related impressions Lucas received. First of all, he needed the very island-model framework. It is Phelps (1970, pp. 6–9) who called his attention to the option of reformulating the decision problem by scattering the agents over isolated markets, while it is Cass who led Lucas to a correct mathematical exposition. However, it is Prescott who in their collaboration prepared Lucas for this exposition; and it is also Prescott who, teamed up with Lucas, provided the paradigmatic example of applying the Muthian rational expectations hypothesis in a stochastic setting with which Lucas (1966/1981b) had formerly dealt only in the less interesting non-stochastic case. As the present paper argued, Lucas’s monetary island model is thus the unification of the impressions Lucas gained under his graduate studies at the University of Chicago, and later from Rapping, Phelps, Cass and Prescott under his years at Carnegie Mellon University.
Here’s the full Table of Contents for the symposium, which requires a subscription to access:
I have been the Managing Editor of the Journal of Economic Perspectives since the first issue in Summer 1987. The JEP is published by the American Economic Association, which decided about a decade ago–to my delight–that the journal would be freely available online, from the current issue all the way back to the first issue. You can download individual articles or entire issues, and it is available in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Spring 2022 issue, which in the Taylor household is known as issue #140. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next few weeks, as well.
___________________
Symposium on Macro Policy in the Pandemic
“A Social Insurance Perspective on Pandemic Fiscal Policy: Implications for Unemployment Insurance and Hazard Pay,” by Christina D. Romer and David H. Romer
This paper considers fiscal policy during the pandemic through the lens of optimal social insurance. We develop a simple framework to analyze how government taxes and transfers could mimic the insurance that people would like to have had against pandemic income losses. Permutations of the framework provide insight into how unemployment insurance should be structured, when and how much hazard pay is called for, and whether fiscal policy should aim just to redistribute income or also to stimulate aggregate demand during a pandemic. When we use the insights from the model to evaluate unemployment insurance measures taken during the pandemic, we find that some, but far from all, of the implications of the social insurance framework were followed. In the case of hazard pay, we find that the proposal for a national program (the never-implemented HEROES Act) was both broader and more generous than a social insurance perspective would call for. We suggest that the social insurance perspective on fiscal policy is likely to become increasingly relevant as pandemics and climate-related natural disasters become more common causes of unemployment and recessions.
“Should We Insure Workers or Jobs during Recessions?” by Giulia Giupponi, Camille Landais and Alice Lapeyre
What is the most efficient way to respond to recessions in the labor market? To this question, policymakers on the two sides of the pond gave diametrically opposed answers during the COVID-19 crisis. In the United States, the focus was on insuring workers by increasing the generosity of unemployment insurance. In Europe, instead, policies were concentrated on saving jobs, with the expansion of short-time work programs to subsidize labor hoarding. Who got it right? In this article, we show that far from being substitutes, unemployment insurance and short-time work exhibit strong complementarities. They provide insurance to different types of workers and against different types of shocks. Short-time work can be effective at reducing socially costly layoffs against large temporary shocks, but it is less effective against more persistent shocks that require reallocation across firms and sectors. We conclude that short-time work is an important addition to the labor market policy-toolkit during recessions, to be used alongside unemployment insurance.
“The $800 Billion Paycheck Protection Program: Where Did the Money Go and Why Did It Go There?” by David Autor, David Cho, Leland D. Crane, Mita Goldar, Byron Lutz, Joshua Montes, William B. Peterman, David Ratner, Daniel Villar and Ahu Yildirmaz
The Paycheck Protection Program (PPP) provided small businesses with roughly $800 billion in uncollateralized, low-interest loans during the pandemic, almost all of which will be forgiven. With 94 percent of small businesses ultimately receiving one or more loans, the PPP nearly saturated its market in just two months. We estimate that the program cumulatively preserved between 2 and 3 million job-years of employment over 14 months at a cost of $169K to $258K per job-year retained. These numbers imply that only 23 to 34 percent of PPP dollars went directly to workers who would otherwise have lost jobs; the balance flowed to business owners and shareholders, including creditors and suppliers of PPP-receiving firms. Program incidence was ultimately highly regressive, with about three-quarters of PPP funds accruing to the top quintile of households. PPP’s breakneck scale-up, its high cost per job saved, and its regressive incidence have a common origin: PPP was essentially untargeted because the United States lacked the administrative infrastructure to do otherwise. Harnessing modern administrative systems, other high-income countries were able to better target pandemic business aid to firms in financial distress. Building similar capacity in the U.S. would enable improved targeting when the next pandemic or other large-scale economic emergency inevitably arises.
“American Enslavement and the Recovery of Black Economic History,” by Trevon D. Logan
This paper reconsiders the evidence needed to answer pressing questions of economic history and racial inequality, the Third Phase of research on American Enslavement and its Aftermath. First, I briefly summarize how economists have sought to understand slavery as an institution. Second, using my family’s narrative as a lens, I show how answers to questions from economic history and economic theory can be answered by expanding our evidentiary base and methodological approaches. In the process, I highlight some areas of what these “traditional” economic perspectives miss. Finally, I briefly provide some examples from other fields—such as recent work by historians—that have sought to provide texture on some of the key dimensions of slavery and racial inequality that have been under-studied by economists.
“The Cumulative Costs of Racism and the Bill for Black Reparations,” by William Darity Jr., A. Kirsten Mullen and Marvin Slaughter
Two major procedures for establishing the monetary value of a plan for reparations for Black American descendants of US slavery are considered in this paper: 1) Enumeration of atrocities and assignment of a dollar value to each as a prelude to adding up the total, and 2) Identification of a summary measure that captures the dollar amount of the cumulative, intergenerational effects of anti-Black atrocities. Under the first approach, the itemization strategy, we assess wage costs to the enslaved of bondage; financial gains to the perpetrators of slavery; damages to Black victims of post-Civil War white massacres and lynchings; losses from discrimination in the provision of the home buying supports from the Federal Housing Administration and the G.I. Bill; and income penalties due to racial discrimination in employment. Under the second approach, the global indicator strategy, we calculate the present value of providing 40 acres of land to freed slaves in 1865 and the current wealth gap between Black and White Americans. We conclude that the latter standard, the racial wealth gap, provides the best gauge for the size of the bill for Black reparations.
“Slavery and the Rise of the Nineteenth-Century American Economy,” by Gavin Wright
The essay considers the claim that slavery played a leading role in the acceleration of US economic growth in the nineteenth century. Although popular among pro-slavery apologists, the proposition fails under rigorous historical scrutiny. The slave South discouraged immigration, underinvested in transportation infrastructure, and failed to educate the majority of its population. It is not even clear that the region produced more cotton than it would have under a counterfactual alternative settlement by free family farmers, on the free-state pattern. The grain of truth in recently popular narratives is that many northerners and business interests were complicit in the crime of slavery: routinely engaging in transactions with slaveholders, even promoting activities that facilitated slavery and the domestic slave trade. Complicity complicates simple historical moralism, but it is quite different from the notion that the prosperity of the nation as a whole derived from slavery in any fundamental way.
“Children and the US Social Safety Net: Balancing Disincentives for Adults and Benefits for Children,” Anna Aizer, Hilary Hoynes and Adriana Lleras-Muney
Economic research on the safety net has evolved over time, moving away from a focus on the negative incentive effects of means-tested assistance on employment, earnings, marriage, and fertility to include the potential positive benefits of such programs to children. Initially, this research on benefits to children focused on short-run impacts, but as we accumulated knowledge about skill production and better data became available, the research evolved further to include important long-run economic outcomes such as employment, earnings, and mortality. Once the positive long-run benefits to children are considered, many safety net programs are cost-effective. However, the current government practice of limiting the time horizon for cost-benefit calculations of policy initiatives often fails to take this into account. Finally, we discuss why child poverty in the United States is still higher than most OECD countries and how research on children and the safety net can better inform policy-making.
“Universal Early-Life Health Policies in the Nordic Countries,” by Miriam Wüst
Given mounting evidence on the negative impact of early-life shocks for the wellbeing of people over the life course, a growing economics literature studies whether early-life policies have symmetric positive effects. This paper zooms in on research on this topic from the Nordic countries, where all families have access to a comprehensive set of early-life health programs, including prenatal, maternity, and well-infant care. I describe this Nordic model of universal early-life health policies and discuss the existing evidence on its causal effects from two categories of studies. First, studying the introduction of universal policies, research has documented important short- and long-run benefits for the health, education, and labor market trajectories of treated cohorts. Second, exploiting modern-day changes to policy design, research for now documents short- and medium-run impacts of universal care on primarily maternal and child health as well as parental investment behaviors. I conclude with directions for future research.
“Inequality in Early Care Experienced by US Children,” by Sarah Flood, Joel McMurry, Aaron Sojourner and Matthew Wiswall
Using multiple datasets on parental and non-parental care provided to children up to age six, we quantify differences in American children’s care experiences by socioeconomic status (SES), proxied primarily with maternal education. Increasingly, higher SES children spend less time with their parents and more time in the care of others. Non-parental care for high-SES children is more likely to be in childcare centers, where average quality is higher, and less likely to be provided by relatives, where average quality is lower. Even within types of childcare, higher-SES children tend to receive care of higher measured quality and higher cost. Inequality is evident at home as well: measures of parental enrichment at home, from both self-reports and outside observers, are on average higher for higher-SES children. Parental and non-parental quality are positively correlated, leading to substantial inequality in the total quality of care received from all sources in early childhood.
“Economics of Foster Care,” by Anthony Bald, Joseph J. Doyle Jr., Max Gross and Brian A. Jacob
Foster care provides substitute living arrangements to protect maltreated children. The practice is remarkably common: it is estimated that 5 percent of children in the United States are placed in foster care at some point during childhood. This paper describes the main tradeoffs in child welfare policy and provides background on policy and practice most in need of rigorous evidence. Trends include efforts to prevent foster care on the demand side and to improve foster home recruitment on the supply side. With increasing data availability and a growing interest in evidence-based practices, there are opportunities for economic research to inform policies that protect vulnerable children.
Retrospectives: “Joan Robinson on Karl Marx: `His Sense of Reality Is Far Stronger,'” by Carolina Alves
This paper revisits why Joan Robinson turned to Karl Marx in 1942 and which insights from Marxian economics she sought to incorporate into her later works, while commenting on how her encounter with Marx was received by some of her contemporaries. By the end of the 1930s, Robinson wanted to bring academic and Marxian economics together in a search for a more realist theory of the rate of profit and income distribution, along with clarifications on Keynes’s concept of full employment and the nature of technical progress and a long-period theory within the Keynesian framework. The result, An Essay on Marxian Economics (1942), was her most important work in terms of laying the foundations of her enduring challenge to orthodox economics. Here she relied on Marxian insights to escape Marshallian orthodoxy. It is the story of how the originator of imperfect competition pushed further into a theory of exploitation.
Here’s an overview of some of the issues from “Lessons Learned from the Breadth of Economic Policies during the Pandemic,” by Wendy Edelberg, Jason Furman, and Timothy F. Geithner.
The U.S. economy experienced a V-shaped recovery of a type not seen in recent recessions. Real Gross Domestic Product (GDP) exceeded its pre-pandemic level by the second quarter of 2021 and was close to pre-pandemic estimates of potential by the fourth quarter of 2021. The unemployment rate ended 2021 below 4.0 percent, just slightly above where it was two years earlier, prior to the pandemic. …
Overall, the United States’ fiscal response appears to have been much larger than the response undertaken by any other country; this was especially true in 2021, when fiscal policy was as supportive as it was in 2020. The U.S. GDP recovery has been among the strongest of any of the advanced economies, but the U.S. employment recovery has been among the weakest; this suggests that both the size of the response and, perhaps, its character and preexisting institutions all matter. …
The economy experienced major side effects from the pandemic and associated policy response, most notably the highest inflation rate in 40 years, far outpacing the increase in wages and leading to the largest real wage declines in decades. In addition, the U.S. government incurred substantial debt during the pandemic. With the expiration of most forms of fiscal support, real household income is likely to be lower in 2022 than in 2021 and could well be below its pre-pandemic trend. As a result, poverty is on track to rise in 2022. Moreover, inflationary pressures and the efforts to moderate those pressures might bring an end to the expansion.
Ultimately, the economic policy response to the COVID-19 recession should be judged not just by its consequences in the spring of 2020, nor by what happened over the next two years, but also by the longer-term effects, and whether the response will prove to have contributed to a stronger and more sustainable economy going forward. …
Here is a nonexhaustive list of the lessons I took away from the essays in the book. I’ll list the table of contents for the book below.
1) When the pandemic recession first hit, the effects were severe and there was no good sense of how long it might last. Thus, the priority of economic policy was to go big and fast: in particular, policies pushed out large chunks of money through rebates, stimulus payments, unemployment insurance, and other channels. Some of these handed out money in essentially untargeted ways. As one example, the Paycheck Protection Program funneled several hundred billion dollars to businesses with fewer than 500 employees, with the idea that it would protect jobs, but given that it was essentially free money from the government, a lot of it ended up going to the owners of the firms. Economic policy in early 2020 faced a choice between targeting and speed, and mostly chose speed.
2) The early goal of economic policy in March and April 2020 was not really seeking to help the economy recover: it was to help large parts of the economy shut down to minimize the chance of the pandemic spreading, but in a way that tried to support income.
3) The economic recovery from the pandemic happened faster than many people expected. Thus, when Joe Biden took the presidential oath of office in January 2021, there was a widespread sense that additional fiscal stimulus was needed. But the recession had ended in April 2020, and the vaccines had arrived. In fact, the US economy in early 2021 was in a quite different place from a year earlier. In a crisis, there is sometimes a sense that “you can never do too much.” But continuing and extending federal support payments in 2021, as if it were still 2020, was a mistake and a contributor to the launch of inflation.
4) Compared to the EU experience, the job market in the US had a much steeper fall. One reason was that US payments to unemployed workers were very high, sometimes more than 100% of previous wages, while payments in European countries typically replaced about 70-90% of lost income. Second, European countries emphasized “short-time work” policies that are like part-time unemployment. The idea was that instead of having a company lay off some workers completely, the company could reduce the hours of all workers, with the government then making up much of the difference in pay. Such policies seek to preserve employer-employee ties, with the idea that such ties make it easier to return to work–and much easier for the employer to require that employees return to their jobs. There are longstanding arguments about the merits of subsidizing workers via unemployment insurance or subsidizing jobs with short-time work programs. There is probably a role for both approaches–but during a short, sharp pandemic shock, short-time work has some real benefits.
5) Near the start of the pandemic recession, there was concern that state and local governments might face severe strains, but the eventual result was more mixed. Louise Sheiner writes:
So, what happened to state and local government revenues, employment, and spending during the first two years of the pandemic? Revenues did not decline nearly as much as had been first feared and federal aid was more than sufficient to offset any revenue losses in every state. Nevertheless, state and local government employment declined sharply, and the decline has been quite persistent: employment by state and local governments in February 2022 was three percent below the January 2020 level. Looked at another way, in February 2022, the state and local sector accounted for 23 percent of the shortfall in U.S. employment from its pre-pandemic trend. … Overall, it seems clear that the employment losses vary a lot by state in ways that cannot fully be explained. … [G]enerous federal aid to states was clearly not sufficient to reverse or prevent all the employment losses. One important question is, why not? What did state and local governments do with the federal aid, and why didn’t they use it to increase employment?
6) The vulnerabilities of the US financial system had played a large role in propagating some recent recessions, including the Great Recession of 2008-2010 and the 1991 recession, which had some links to the collapse of the savings and loan industry. But in the pandemic recession, the US banking system performed just fine. A large part of that performance was that the rules put into place after the Great Recession, setting the capital and safety standards that banks needed to meet, were effective. The Federal Reserve also played a role in extending short-run credit and making sure that financial markets didn’t freeze in place, especially in March 2020, but overall, the story in the financial sector is the success of the earlier reforms.
The book often returns to the theme that the next recession is likely to come from its own idiosyncratic cause–that is, not from a pandemic–and it is worth thinking about what policies might be put in place now that would kick in automatically when that recession hits. Here’s the table of contents for the book as a whole:
China is taking the lead in moving to a central bank digital currency. It’s not altogether clear how much the US and other high-income countries should be worried about this. Sometimes it’s better to be the one who watches someone else go first, and then learns from their experience. For sorting out the benefits and risks, a useful starting point is Digital Currencies: The US, China, and the World at a Crossroads, edited by Darrell Duffie and Elizabeth Economy based on the discussions of a task force convened at the Hoover Institution.
I’ve described the central bank digital currency controversies before at this blog, but it’s probably useful to review. What we’re talking about here is how payments from one party to another are made behind the scenes–debit cards, credit cards, direct deposit, even old-fashioned paper checks. Duffie and Economy describe how the “bank-railed” systems of the past have worked:
For centuries, the world has relied on banks to handle the vast majority of payments via a straightforward and generally safe method. In the simplest common cases, a bank-railed payment system works like this: Alice pays Bob $100 by instructing her bank to deduct $100 from her bank account and to deposit $100 into Bob’s account at his bank. The instruction can take the form of the tap of a credit or debit card, a wire transfer, or a paper check, among other methods. In some countries, including the United States, the payment medium—bank deposits—is extremely safe, and banks take reasonable care to protect the privacy of their customers and monitor the legality of payments.
As shown in figure 1.1, many countries have been upgrading bank-railed payments by introducing “fast-payment systems,” which can make instant payments possible around the clock, largely eliminating costly delays and payment risks. The United States has a fast-payment system provided by a consortium of large banks. Not satisfied that the bank-provided solution will be sufficient, the US central bank, the Federal Reserve, will introduce its own fast-payment system, FedNow, by 2024.
With this and certain other improvements in traditional payment systems, why are most countries now considering radically disrupting their bank-railed payment systems by introducing CBDCs, or by accommodating other kinds of digital currencies? The answer is that most central banks have begun to question whether merely upgrading their bank-railed payment systems will be enough to meet the challenges of the future digital economy. They have also begun to consider whether to encourage, and how to regulate, private sector fintech innovations such as stablecoins. Moreover, some in the official sector are concerned about whether banks face sufficient competition for providing innovative and cost-efficient payment services.
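To make the contrast concrete, the Alice-pays-Bob flow in the quoted passage can be sketched as a pair of deposit-ledger updates plus an interbank settlement leg. This is only an illustrative toy, not how any actual bank system is implemented; the account names and balances are hypothetical:

```python
# Toy sketch of a bank-railed payment. Account names and balances are
# hypothetical. Alice pays Bob $100: her bank debits her deposit, the
# two banks settle with each other behind the scenes (e.g., via
# reserve accounts), and Bob's bank credits his deposit.

alice_bank = {"deposits": {"alice": 500}, "reserves": 1000}
bob_bank = {"deposits": {"bob": 200}, "reserves": 1000}

def pay(payer_bank, payer, payee_bank, payee, amount):
    if payer_bank["deposits"][payer] < amount:
        raise ValueError("insufficient funds")
    payer_bank["deposits"][payer] -= amount  # debit Alice's account
    payer_bank["reserves"] -= amount         # settlement leg between the banks
    payee_bank["reserves"] += amount
    payee_bank["deposits"][payee] += amount  # credit Bob's account

pay(alice_bank, "alice", bob_bank, "bob", 100)
```

The point of the sketch is that even a simple payment touches three ledgers–each bank’s deposit records plus the settlement accounts between them–and it is in that settlement leg that the delays and costs of traditional rails tend to arise.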
How would a central bank digital currency work differently? Duffie and Economy explain:
Often in response to private fintech innovations or the declining use of paper money, some central banks are developing CBDCs. A CBDC is a deposit in the central bank that can be used to make payments. For example, Alice can pay Bob $100 by shifting $100 out of her central bank account and into Bob’s central bank account, whether on an internet website, a mobile phone app, or a payment smart card, among other methods. Depending on their designs, CBDCs can also be used for offline payments, meaning without access to the internet or a phone network. In many cases, Alice and Bob would obtain their CBDC and the necessary application software (“apps”) from private sector firms such as banks, even though the CBDC itself is a claim against the central bank. A general purpose CBDC, often called a “retail” CBDC, would be available to anyone and accepted by anyone, much like paper currency but allowing for greater efficiencies and a wider range of uses. Special-purpose CBDCs can also improve the efficiency of wholesale financial transaction settlements and cross-border payments. …
Most CBDCs currently being developed adopt a hybrid model, according to which the central bank issues the CBDC to banks and other payment service providers, which in turn distribute the CBDC to users throughout the economy and provide them with account-related services.
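Under the retail CBDC described in the quoted passage, the same Alice-pays-Bob payment collapses to a single transfer on the central bank’s own ledger. Here is a minimal sketch, again with hypothetical names and balances:

```python
# Toy sketch of a retail CBDC payment: both balances are claims on the
# central bank, so Alice-pays-Bob is one internal ledger transfer.
# Names and balances are hypothetical.

central_bank = {"alice": 500, "bob": 200}

def cbdc_pay(ledger, payer, payee, amount):
    if ledger[payer] < amount:
        raise ValueError("insufficient funds")
    ledger[payer] -= amount
    ledger[payee] += amount

cbdc_pay(central_bank, "alice", "bob", 100)
```

In the hybrid model, banks and payment firms would still supply the apps and account services layered on top, but the balances being transferred are claims on the central bank itself.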
In some ways, this doesn’t sound like much of a change. It sounds as if payments would still go through banks, but now, behind the scenes, the accounts would be settled with the CBDC. How does this approach provide any gains?
The short-term gain for US consumers is that payments could be faster and cheaper. Duffie and Economy write:
North Americans pay over 2 percent of their GDP for payment services, according to data from McKinsey, more than most of the rest of the world pays, particularly because of extremely high fees for credit cards. US payments are also processed and cleared slowly, often taking more than a day before they can be used by the recipient. And Americans’ primary payment instrument, bank deposits, is compensated with very low interest rates relative to wholesale money-market rates.
The longer-term benefit has to do with financial innovation and competition. Say that we shifted away from a “bank-railed” system, where financial transactions ultimately settle between banks, so that other financial technology companies could also have a CBDC account at the central bank. A number of US companies are among the innovators in payment systems. Duffie and Economy mention “Arbitrum, Avanti Bank, Betterfin, Celo, Chime, Circle (USD Coin), Coinbase, Diem, Electric Capital, Imperial PFS, Jiko, JP Morgan, Mobile Coin, Optimism, Paxos, Plaid, Polygon, R3, Ripple, SoFi, Stellar, Topl, Varo Bank, Venmo, Yodlee, and Zelle.” But whatever services these firms offer, in the US economy they are ultimately, behind the scenes, operating through banks. If they instead could have CBDC accounts at the central bank, new types of financial communication could be unlocked.
Duffie and Economy emphasize that when it comes to financial technology and payments technology, large Chinese firms have taken the lead: “In China, for example, 94 percent of mobile payments are now processed by Alipay and WeChat Pay, with 90 percent of residents of China’s largest cities using these services as their primary method of payment … Building on Alipay, the Ant Group provides relatively low-cost and widely accessible small-business credit, wealth management, and insurance. Alipay now reaches small-tier cities and many rural areas in China.” In addition, China has been experimenting in major cities with a new central bank digital currency, the e-CNY, and plans to launch it more broadly later this year.
China’s public posture is that the e-CNY is really just for domestic use. But one potential concern for the United States is that the system would, at least in concept, allow international payments as well in a way that would circumvent the SWIFT (Society for Worldwide Interbank Financial Telecommunications) system that is now the standard coordinator for international financial payments–and also a primary tool for imposing financial sanctions on other countries.
I’ve written about the risks of a CBDC in the past, and won’t repeat it all here. Might such a system pose risks to conventional banks? If conventional banks face standard financial regulation, with its costs and requirements, what regulations are appropriate for non-banks that would have a pipeline into their own account at the Fed? For example, banks face “know-your-customer” rules aimed at limiting money-laundering or financing other illegal activities. Would all the non-banks need to abide by similar rules? If not, do these nonbank financial firms create a risk of financial instability? A CBDC based on US dollars would be one of the preeminent targets for computer hackers all over the world. How would a CBDC be related to cryptocurrencies and blockchain-related innovations? What degree of privacy would a CBDC involve? In China, there doesn’t seem to be much of a problem with the idea that the central bank would oversee all the accounts in this system: in the US and other high-income countries, such a step might be more controversial.
Finally, remember that these non-bank financial firms aren’t necessarily just about payments. They might also offer loans, or assurances of kinds of contractual financial payments. They might offer insurance or investment options.
I’m not confident that the Federal Reserve is ready to launch a central bank digital currency, and US financial markets and innovation are rather different from those in China. But China’s experiment with launching the e-CNY is worth watching.
Oleg Itskhoki was just awarded the John Bates Clark medal, given each year by the American Economic Association “to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge.” It’s sometimes called the “baby Nobel,” because when the recipients get close to or enter retirement someday, they will often be among the top contenders for a Nobel prize in economics. For those interested in knowing more about Itskhoki’s work in international finance and trade, he offers a readable overview of one slice of his work on “Dominant Currencies” in the most recent NBER Reporter (March 2022). He writes:
While the US dollar accounts for a disproportionate share of international trade, there is a small subset of currencies that are actively used in this trade alongside the dollar, most notably the euro, but to a lesser extent the pound, the Japanese yen, the Swiss franc, and the Chinese yuan. In some bilateral trade flows these currencies play as important a role as the dollar [see Figure 1], with considerable variation in currency use across individual firms even within narrowly defined industries. The dollar and the euro have emerged as the two leading currencies in accounting for international trade flows, with the role of the euro elevated by the fact that a large portion of international trade happens among European countries or involves one of the European countries. A distinctive feature of dominant currencies is that the same currency is equally prevalent in both imports and exports, a feature common to both the dollar and the euro, which is also at odds with standard international macro models that assume a greater role for many currencies to be present in global trade. Nonetheless, a clear distinction between the dollar and the euro is that the dollar in many cases is also a vehicle currency, not used domestically by either the importing or the exporting country. One can thus think of the dollar as the dominant global currency and of the euro as the dominant regional currency.
Here’s the Figure 1 mentioned in the above quotation, which focuses on Belgium. Belgium is a smallish economy in Europe (its GDP is about the same size as that of the American state of Michigan). It uses the euro as its currency and is highly integrated into the economy of Europe and the global economy. The export/GDP and import/GDP ratios for Belgium are about 80%; for comparison, the corresponding ratios for the US economy have been in the range of 12-15% in recent years.
The figure shows the use of the US dollar and the euro for Belgium’s international trade with different countries. The left-hand panel looks at Belgium’s exports. As you can see, Belgium’s exports to the US are mostly invoiced in US dollars. Belgium’s exports to India are about 40% US dollar and 40% euro. Belgium’s exports to Japan are about 30% euro and 15% US dollar–with the rest of those exports probably being invoiced in Japanese yen.
With this kind of data for many countries, a researcher can study the determinants of currency choice, and thus gain insight into why the US dollar and a few other currencies are dominant–and what the effects would be of shifts in dominant currencies. Itskhoki lays out the theories of currency choice, which are based on the canonical three functions fulfilled by money.
Medium-of-exchange theories emphasize that a currency is adopted if it guarantees the lowest transaction costs or maximizes room for mutually beneficial exchange. These theories stress country size as a fundamental force, as well as the likelihood of multiple coordination equilibria and other macroeconomic factors that make it too costly to use currencies of developing countries … Store-of-value theories link currency choice in exports with the currency of financing of the firm as part of a combined risk-management decision. Finally, unit-of-account theories postulate that a price is set in a given currency and is not adjusted in the short run …
One basic insight here is that firms producing in one country and selling in another must figure out how to deal with the risk of shifting exchange rates–and the currency for a given transaction will be chosen with this issue in mind. For example, imagine a Belgian firm that imports lots of inputs from the US, and then exports back to the US. For that firm, using the US dollar to invoice both its imports and exports means that it is protected from fluctuations in the exchange rate of the US dollar. Similarly, it turns out that firms with cross-border ownership are more likely to invoice in US dollars. Most real-world examples are more complex than this, of course, but the decision about currency ends up involving the extent to which higher costs incurred in one currency can be passed through to a buyer in another currency.
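The hedging logic above can be made concrete with a little arithmetic. Here is a toy sketch (entirely hypothetical numbers, not drawn from the research) of a Belgian firm that buys US inputs and sells back to the US: if both sides of its business are invoiced in dollars, its per-unit margin measured in euros moves far less with the exchange rate than if it invoiced its exports in euros while its input costs stayed in dollars.

```python
def margin_in_euros(usd_per_eur, export_price_usd=100.0, input_cost_usd=80.0,
                    invoice_exports_in_usd=True, export_price_eur=90.0):
    """Per-unit profit in euros for a hypothetical Belgian firm that imports
    $80 of US inputs per unit and sells each unit back to the US market."""
    cost_eur = input_cost_usd / usd_per_eur
    if invoice_exports_in_usd:
        # Revenue is fixed in dollars, so it moves with costs: a natural hedge.
        revenue_eur = export_price_usd / usd_per_eur
    else:
        # Revenue is fixed (sticky) in euros, while costs still move in dollars.
        revenue_eur = export_price_eur
    return revenue_eur - cost_eur

for rate in (1.0, 1.1, 1.2):  # USD per EUR
    usd_m = margin_in_euros(rate, invoice_exports_in_usd=True)
    eur_m = margin_in_euros(rate, invoice_exports_in_usd=False)
    print(f"USD per EUR = {rate}: dollar-invoiced margin {usd_m:.1f} euros, "
          f"euro-invoiced margin {eur_m:.1f} euros")
```

With these made-up numbers, the dollar-invoiced margin only shrinks gently from 20.0 to about 16.7 euros as the dollar weakens, while the euro-invoiced margin swings from 10.0 to about 23.3 euros over the same range: invoicing both sides in dollars stabilizes the firm's profit against dollar fluctuations.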
One of the questions I am asked repeatedly is when or whether the Chinese renminbi (yuan) will become the world’s dominant currency. Such shifts of dominant currency have historically happened only slowly and occasionally. But here’s Itskhoki on how shifting conditions could foster such a change:
One possibility is that the US dollar strengthens its position as the dominant global currency. This could happen with greater globalization of production and more intensive reliance on global value chains; our results show that cross-border foreign direct investment — a proxy for global value chains — is associated with more US dollar currency invoicing. This would render exchange rates less relevant as determinants of relative prices and expenditure switching in the global supply chain. In contrast, fragmentation and localization of production chains, which might happen in response to a global pandemic shock, can reverse this trend and speed up the transition to a multicurrency equilibrium, with more intensive regional trade and greater barriers to cross-regional trade. This, in turn, may increase the expenditure-switching role of bilateral exchange rate movements.
Alternatively, a shift in the exchange rate anchoring policies of the major trade partners, such as China, could trigger a long-run shift in the equilibrium environment. If China were to freely float its exchange rate, encouraging Chinese exporters to price more intensively in renminbi, then the equilibrium environment would change for exporting firms around the world. In particular, this would alter both the dynamics of prices in the input markets as well as the competitive environment in the output markets across many industries. As our results show, the currency in which a firm’s imports are invoiced and the currency in which its competitors price are key determinants of an exporting firm’s currency choice, and hence this shift could dramatically change the optimal invoicing patterns for exporting firms.