The Pandemic Response: Policy Lessons

The actual economic recession connected with the COVID-19 pandemic turned out to be extremely short, lasting only two months: March and April 2020. Of course, the dislocations and restrictions associated with the pandemic in areas like health, jobs, sectors of the economy, online education, and travel have continued in various forms since then. But focusing on the economic issues, what have we learned? Recession Remedies: Lessons Learned from the U.S. Economic Policy Response to COVID-19, edited by Wendy Edelberg, Louise Sheiner, and David Wessel and freely available online, provides nine essays on different aspects of the economic policy response.

Here’s an overview of some of the issues from “Lessons Learned from the Breadth
of Economic Policies during the Pandemic,” by Wendy Edelberg, Jason Furman, and Timothy F. Geithner.

The U.S. economy experienced a V-shaped recovery of a type not seen in recent recessions. Real Gross Domestic Product (GDP) exceeded its pre-pandemic level by the second quarter of 2021 and was close to pre-pandemic estimates of potential by the fourth quarter of 2021. The unemployment rate ended 2021 below 4.0 percent, just slightly above where it was two years earlier, prior to the pandemic. …

Overall, the United States’ fiscal response appears to have been much larger
than the response undertaken by any other country; this was especially true in
2021, when fiscal policy was as supportive as it was in 2020. The U.S. GDP recovery
has been among the strongest of any of the advanced economies, but the U.S.
employment recovery has been among the weakest; this suggests that both the size
of the response and, perhaps, its character and preexisting institutions all matter. …

The economy experienced major side effects from the pandemic and associated
policy response, most notably the highest inflation rate in 40 years, far
outpacing the increase in wages and leading to the largest real wage declines in
decades. In addition, the U.S. government incurred substantial debt during the
pandemic. With the expiration of most forms of fiscal support, real household
income is likely to be lower in 2022 than in 2021 and could well be below its
pre-pandemic trend. As a result, poverty is on track to rise in 2022. Moreover,
inflationary pressures and the efforts to moderate those pressures might bring
an end to the expansion.

Ultimately, the economic policy response to the COVID-19 recession should
be judged not just by its consequences in the spring of 2020, not what happened
over the next two years, but also by the longer-term effects, and whether the
response will prove to have contributed to a stronger and more sustainable
economy going forward. …

Here is a nonexhaustive list of the lessons I took away from the essays in the book. I’ll list the table of contents for the book below.

1) When the pandemic recession first hit, the effects were severe and there was no good sense of how long it might last. Thus, the priority of economic policy was to go big and fast: in particular, policies that spent large chunks of money on rebate checks, stimulus payments, expanded unemployment insurance, and more. Some of these handed out money in essentially untargeted ways. As one example, the Paycheck Protection Program funneled several hundred billion dollars to businesses with fewer than 500 employees, with the idea that it would protect jobs, but given that it was essentially free money from the government, a lot of it ended up going to the owners of the firms. Economic policy in early 2020 faced a choice between targeting and speed, and mostly chose speed.

2) The early goal of economic policy in March and April 2020 was not really to help the economy recover: it was to help large parts of the economy shut down, to minimize the chance of the pandemic spreading, but in a way that tried to support incomes.

3) The economic recovery from the pandemic happened faster than many people expected. When Joe Biden took the presidential oath of office in January 2021, there was a widespread sense that additional fiscal stimulus was needed. But the recession had ended in April 2020, and the vaccines had arrived. In fact, the US economy in early 2021 was in a quite different place than it had been a year earlier. In a crisis, there is sometimes a sense that “you can never do too much.” But continuing and extending federal support payments in 2021, as if it were still 2020, was a mistake and a contributor to the surge of inflation.

4) Compared to the European experience, the US job market had a much steeper fall. One reason was that US payments to unemployed workers were very high, sometimes replacing more than 100 percent of previous wages, while payments in European countries typically replaced about 70-90 percent of lost income. Second, European countries emphasized “short-time work” policies, which are like part-time unemployment insurance. The idea was that instead of having a company lay off some workers completely, the company could reduce the hours of all workers, with the government then making up much of the difference in pay. Such policies seek to preserve employer-employee ties, with the idea that such ties make it easier for workers to return to work–and much easier for the employer to ask that employees return to their jobs. There are longstanding arguments about the merits of subsidizing workers via unemployment insurance or subsidizing jobs with short-time work programs. There is probably a role for both approaches–but during a short, sharp pandemic shock, short-time work has some real benefits.

5) Near the start of the pandemic recession, there was concern that state and local governments might face severe strains, but the eventual result was more mixed. Louise Sheiner writes:

So, what happened to state and local government revenues, employment, and spending during the first two years of the pandemic? Revenues did not decline nearly as much as had been first feared and federal aid was more than sufficient to offset any revenue losses in every state. Nevertheless, state and local government employment declined sharply, and the decline has been quite persistent: employment by state and local governments in February 2022 was three percent below the January 2020 level. Looked at another way, in February 2022, the state and local sector accounted for 23 percent of the shortfall in
U.S. employment from its pre-pandemic trend. … Overall, it seems clear that the employment losses vary a lot by state in ways that cannot fully be explained. … [G]enerous
federal aid to states was clearly not sufficient to reverse or prevent all the employment losses. One important question is, why not? What did state and local governments do with the federal aid, and why didn’t they use it to increase employment?

6) The vulnerabilities of the US financial system had played a large role in propagating some recent recessions, including the Great Recession of 2008-2010 and the 1991 recession, which had some links to the collapse of the savings and loan industry. But in the pandemic recession, the US banking system performed just fine. A large part of that performance was that the rules put into place after the Great Recession, governing the capital and safety standards that banks needed to meet, proved effective. The Federal Reserve also played a role in extending short-run credit and making sure that financial markets didn’t freeze in place, especially in March 2020, but overall, the story in the financial sector is the success of the earlier reforms.

The book often returns to the theme that the next recession is likely to come from its own idiosyncratic cause–that is, not from a pandemic–and it is worth thinking about what policies might be put in place now that would kick in automatically when that recession hits. Here’s the table of contents for the book as a whole:


China’s Move to a Central Bank Digital Currency

China is taking the lead in moving to a central bank digital currency. It’s not altogether clear how much the US and other high-income countries should be worried about this. Sometimes it’s better to be the one who watches someone else go first, and then learns from their experience. For sorting out the benefits and risks, a useful starting point is Digital Currencies: The US, China, and the World at a Crossroads, edited by Darrell Duffie and Elizabeth Economy, based on the discussions of a task force convened at the Hoover Institution.

I’ve described the central bank digital currency controversies before at this blog, but it’s probably useful to review. What we’re talking about here is how payments from one party to another are made behind the scenes–debit cards, credit cards, direct deposit, even old-fashioned paper checks. Duffie and Economy describe how the “bank-railed” systems of the past have worked:

For centuries, the world has relied on banks to handle the vast majority of payments via a straightforward and generally safe method. In the simplest common cases, a bank-railed payment system works like this: Alice pays Bob $100 by instructing her bank to deduct $100 from her bank account and to deposit $100 into Bob’s account at his bank. The instruction can take the form of the tap of a credit or debit card, a wire transfer, or a paper check, among other methods. In some countries, including the United States, the payment medium—bank deposits—is extremely safe, and banks take reasonable care to protect the privacy of their customers and monitor the legality of payments.

As shown in figure 1.1, many countries have been upgrading bank-railed payments by introducing “fast-payment systems,” which can make instant payments possible around the clock, largely eliminating costly delays and payment risks. The United States has a fast-payment system provided by a consortium of large banks. Not satisfied that the bank-provided solution will be sufficient, the US central bank, the Federal Reserve, will introduce its own fast-payment system, FedNow, by 2024.

With this and certain other improvements in traditional payment systems, why are most countries now considering radically disrupting their bank-railed payment systems by introducing CBDCs, or by accommodating other kinds of digital currencies? The answer is that most central banks have begun to question whether merely upgrading their bank-railed payment systems will be enough to meet the challenges of the future digital economy. They have also begun to consider whether to encourage, and how to regulate, private sector fintech innovations such as stablecoins. Moreover, some in the official sector are concerned about whether banks face sufficient competition for providing innovative and cost-efficient payment services.

How would a central bank digital currency work differently? Duffie and Economy explain:

Often in response to private fintech innovations or the declining use of paper money, some central banks are developing CBDCs. A CBDC is a deposit in the central bank that can be used to make payments. For example, Alice can pay Bob $100 by shifting $100 out of her central bank account and into Bob’s central bank account, whether on an internet website, a mobile phone app, or a payment smart card, among other methods. Depending on their designs, CBDCs can also be used for offline payments, meaning without access to the internet or a phone network. In many cases, Alice and Bob would obtain their CBDC and the necessary application software (“apps”) from private sector firms such as banks,
even though the CBDC itself is a claim against the central bank. A general purpose CBDC, often called a “retail” CBDC, would be available to anyone and accepted by anyone, much like paper currency but allowing for greater efficiencies and a wider range of uses. Special-purpose CBDCs can also improve the efficiency of wholesale financial transaction settlements and cross-border payments. …

Most CBDCs currently being developed adopt a hybrid model, according to which the central bank issues the CBDC to banks and other payment service providers, which in turn distribute the CBDC to users throughout the economy and provide them with account-related services.

In some ways, this doesn’t sound like much of a change. It sounds as if payments would still go through banks, but now, behind the scenes, the accounts would be settled with the CBDC. How does this approach provide any gains?
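As a toy sketch (my own illustration, not from the book), the key difference is where the claim lives: in a bank-railed payment, Alice’s and Bob’s balances are deposits at their commercial banks, which settle behind the scenes, while in a hybrid-model CBDC payment both balances sit directly on the central bank’s ledger, even if a commercial bank provides the wallet app.

```python
# Toy sketch (my own, not from the book): where the $100 claim lives
# in a bank-railed payment versus a hybrid-model CBDC payment.

class Bank:
    """Commercial bank: customer deposits are claims on this bank."""
    def __init__(self, name):
        self.name = name
        self.deposits = {}  # customer -> balance (claim on the bank)

class CentralBank:
    """Hybrid CBDC: balances are claims on the central bank itself,
    even when a commercial bank distributes the wallet software."""
    def __init__(self):
        self.cbdc = {}  # customer -> balance (claim on the central bank)

def bank_railed_payment(payer_bank, payee_bank, payer, payee, amount):
    # Alice instructs her bank; her bank and Bob's bank adjust deposits
    # and settle with each other behind the scenes.
    payer_bank.deposits[payer] -= amount
    payee_bank.deposits[payee] = payee_bank.deposits.get(payee, 0) + amount

def cbdc_payment(central_bank, payer, payee, amount):
    # The same $100, but both balances move on the central bank's own ledger.
    central_bank.cbdc[payer] -= amount
    central_bank.cbdc[payee] = central_bank.cbdc.get(payee, 0) + amount

# Alice pays Bob $100 under each system.
a_bank, b_bank = Bank("First Bank"), Bank("Second Bank")
a_bank.deposits["Alice"] = 500
bank_railed_payment(a_bank, b_bank, "Alice", "Bob", 100)

fed = CentralBank()
fed.cbdc["Alice"] = 500
cbdc_payment(fed, "Alice", "Bob", 100)

print(a_bank.deposits["Alice"], b_bank.deposits["Bob"])  # 400 100
print(fed.cbdc["Alice"], fed.cbdc["Bob"])                # 400 100
```

The user-facing result is identical, which is the point: the gains from a CBDC come not from the payment mechanics but from who is allowed to hold an account on the central ledger.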

The short-term gain for US consumers is that payments could be faster and cheaper. Duffie and Economy write:

North Americans pay over 2 percent of their GDP for payment services, according to data from McKinsey, more than most of the rest of the world pays, particularly because of extremely high fees for credit cards. US payments are also processed and cleared slowly, often taking more than a day before they can be used by the recipient. And Americans’ primary payment instrument, bank deposits, is compensated with very low interest rates relative to wholesale money-market rates.

The longer-term benefit has to do with financial innovation and competition. Say that we shift away from a “bank-railed” system, in which financial transactions ultimately settle between banks, so that other financial technology companies could also have a CBDC account at the central bank. A number of US companies are among the innovators in payment systems. Duffie and Economy mention “Arbitrum, Avanti Bank, Betterfin, Celo, Chime, Circle (USD Coin), Coinbase, Diem, Electric Capital, Imperial PFS, Jiko, JP Morgan, Mobile Coin, Optimism, Paxos, Plaid, Polygon, R3, Ripple, SoFi, Stellar, Topl, Varo Bank, Venmo, Yodlee, and Zelle.” But whatever services these firms offer, in the US economy they are ultimately, behind the scenes, operating through banks. If they instead could have CBDC accounts at the central bank, new types of financial innovation could be unlocked.

Duffie and Economy emphasize that when it comes to financial technology and payments technology, large Chinese firms have taken the lead: “In China, for example, 94 percent of mobile payments are now processed by Alipay and WeChat Pay, with 90 percent of residents
of China’s largest cities using these services as their primary method of payment … Building on Alipay, the Ant Group provides relatively low-cost and widely accessible small-business credit, wealth management, and insurance. Alipay now reaches small-tier cities and many
rural areas in China.” In addition, China has been experimenting in major cities with a new central bank digital currency, the e-CNY, and plans to launch it more broadly later this year.

China’s public posture is that the e-CNY is really just for domestic use. But one potential concern for the United States is that the system would, at least in concept, allow international payments as well, in a way that would circumvent the SWIFT (Society for Worldwide Interbank Financial Telecommunication) system that is now the standard coordinator for international financial payments–and also a primary tool for imposing financial sanctions on other countries.

I’ve written about the risks of a CBDC in the past, and won’t repeat it all here. Might such a system pose risks to conventional banks? If conventional banks face standard financial regulation, with its costs and requirements, what regulations are appropriate for non-banks that would have a pipeline into their own account at the Fed? For example, banks face “know-your-customer” rules aimed at limiting money-laundering or financing other illegal activities. Would all the non-banks need to abide by similar rules? If not, do these nonbank financial firms create a risk of financial instability? A CBDC based on US dollars would be one of the preeminent targets for computer hackers all over the world. How would a CBDC be related to cryptocurrencies and blockchain-related innovations? What degree of privacy would a CBDC involve? In China, there doesn’t seem to be much of a problem with the idea that the central bank would oversee all the accounts in this system: in the US and other high-income countries, such a step might be more controversial.

Finally, remember that these non-bank financial firms aren’t necessarily just about payments. They might also offer loans, or guarantees of certain kinds of contractual financial payments. They might offer insurance or investment options.

I’m doubtful that the Federal Reserve is ready to launch a central bank digital currency, and US financial markets and innovation are rather different from those in China. But China’s experiment with launching the e-CNY is worth watching.

Some Economics of Dominant Currencies

Oleg Itskhoki was just awarded the John Bates Clark medal, given each year by the American Economic Association “to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge.” It’s sometimes called the “baby Nobel,” because its recipients, as they approach or enter retirement someday, will often be among the top contenders for a Nobel prize in economics. For those interested in knowing more about Itskhoki’s work in international finance and trade, he offers a readable overview of one slice of his work on “Dominant Currencies” in the most recent NBER Reporter (March 2022). He writes:

While the US dollar accounts for a disproportionate share of international trade, there is a small subset of currencies that are actively used in this trade alongside the dollar, most notably the euro, but to a lesser extent the pound, the Japanese yen, the Swiss franc, and the Chinese yuan. In some bilateral trade flows these currencies play as important a role as the dollar [see Figure 1], with considerable variation in currency use across individual firms even within narrowly defined industries. The dollar and the euro have emerged as the two leading currencies in accounting for international trade flows, with the role of the euro elevated by the fact that a large portion of international trade happens among European countries or involves one of the European countries. A distinctive feature of dominant currencies is that the same currency is equally prevalent in both imports and exports, a feature common to both the dollar and the euro, which is also at odds with standard international macro models that assume a greater role for many currencies to be present in global trade. Nonetheless, a clear distinction between the dollar and the euro is that the dollar in many cases is also a vehicle currency, not used domestically by either the importing or the exporting country. One can thus think of the dollar as the dominant global currency and of the euro as the dominant regional currency. …

Here’s the Figure 1 mentioned in the above quotation, which focuses on Belgium. Belgium is a smallish economy in Europe (its GDP is about the same as that of the American state of Michigan). It uses the euro as its currency and is highly integrated into the European and global economies. The export/GDP and import/GDP ratios for Belgium are about 80%; the comparable ratios for the US economy have been in the range of 12-15% in recent years.

The figure shows the use of the US dollar and the euro for Belgium’s international trade with different countries. The left-hand panel looks at Belgium’s exports. As you can see, Belgium’s exports to the US are mostly invoiced in US dollars. Belgium’s exports to India are about 40% US dollar and 40% euro. Belgium’s exports to Japan are about 30% euro and 15% US dollar–with the rest of those exports probably being invoiced in Japanese yen.

With this kind of data for many countries, a researcher can study the determinants of what currency gets used for what reasons, and thus get insights into why the US dollar and a few others are dominant currencies–and what the effects would be of shifts in dominant currencies. Itskhoki lays out the theories of currency choice, which are based on the three canonical functions of money.

Medium-of-exchange theories emphasize that a currency is adopted if it guarantees the lowest transaction costs or maximizes room for mutually beneficial exchange. These theories stress country size as a fundamental force, as well as the likelihood of multiple coordination equilibria and other macroeconomic factors that make it too costly to use currencies of developing countries … Store-of-value theories link currency choice in exports with the currency of financing of the firm as part of a combined risk-management decision. Finally, unit-of-account theories postulate that a price is set in a given currency and is not adjusted in the short run …

One basic insight here is that firms producing in one country and selling in another must figure out how to deal with the risk of shifting exchange rates–and the currency for a given transaction will be chosen with this issue in mind. For example, imagine a Belgian firm that imports lots of inputs from the US, and then exports back to the US. For that firm, using the US dollar to invoice both its imports and its exports means that it is protected from fluctuations in the exchange rate of the US dollar. Similarly, it turns out that firms with cross-border ownership are more likely to invoice in US dollars. Most real-world examples are more complex than this, of course, but the decision about currency ends up involving the extent to which higher costs incurred in one currency can be passed through to a buyer in another currency.
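To make the hedging logic concrete, here is a toy calculation of my own (hypothetical numbers, not from Itskhoki’s article): the Belgian firm pays $80 for US inputs and invoices its exports at 100 in whichever currency it chooses. Invoicing both sides in dollars locks in the dollar margin; invoicing exports in euros leaves the margin exposed to the exchange rate.

```python
# Toy illustration (hypothetical numbers): a Belgian firm imports $80 of
# US inputs and invoices its exports at 100 in its chosen currency.
# Margins are measured in dollars.

def dollar_margin(usd_per_eur, export_currency):
    """Dollar profit margin given the spot rate and export invoicing currency."""
    import_cost_usd = 80.0  # inputs are invoiced in dollars either way
    if export_currency == "USD":
        revenue_usd = 100.0                  # $100 invoice: no conversion needed
    else:
        revenue_usd = 100.0 * usd_per_eur    # EUR 100 invoice converted at spot
    return revenue_usd - import_cost_usd

# Dollar invoicing on both sides yields a $20 margin at any exchange rate;
# euro-invoiced exports leave the margin swinging with the EUR/USD rate.
for rate in (1.25, 1.00, 0.80):  # dollars per euro
    print(f"rate {rate:.2f}: USD-invoiced margin ${dollar_margin(rate, 'USD'):.0f}, "
          f"EUR-invoiced margin ${dollar_margin(rate, 'EUR'):.0f}")
```

The matched-currency margin is constant at $20, while the mismatched one ranges from $45 down to $0 across these rates, which is exactly the exposure that drives the invoicing choice.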

One of the questions I am asked repeatedly is when or whether the Chinese renminbi yuan will become the world’s dominant currency. Such shifts of dominant currency have historically happened only slowly and occasionally. But here’s Itskhoki on how shifting conditions could foster such a change:

One possibility is that the US dollar strengthens its position as the dominant global currency. This could happen with greater globalization of production and more intensive reliance on global value chains; our results show that cross-border foreign direct investment — a proxy for global value chains — is associated with more US dollar currency invoicing. This would render exchange rates less relevant as determinants of relative prices and expenditure switching in the global supply chain. In contrast, fragmentation and localization of production chains, which might happen in response to a global pandemic shock, can reverse this trend and speed up the transition to a multicurrency equilibrium, with more intensive regional trade and greater barriers to cross-regional trade. This, in turn, may increase the expenditure-switching role of bilateral exchange rate movements.

Alternatively, a shift in the exchange rate anchoring policies of the major trade partners, such as China, could trigger a long-run shift in the equilibrium environment. If China were to freely float its exchange rate, encouraging Chinese exporters to price more intensively in renminbi, then the equilibrium environment would change for exporting firms around the world. In particular, this would alter both the dynamics of prices in the input markets as well as the competitive environment in the output markets across many industries. As our results show, the currency in which a firm’s imports are invoiced and the currency in which its competitors price are key determinants of an exporting firm’s currency choice, and hence this shift could dramatically change the optimal invoicing patterns for exporting firms.

Universal Schooling in Developing Countries

There are no examples of countries with generally low levels of educational achievement that are also high-income nations; conversely, there are many examples of countries where the level of educational achievement first increased substantially and economic growth then followed. Eric A. Hanushek and Ludger Woessmann describe their recent research on basic skills around the world in “The Basic Skills Gap” (Finance & Development, June 2022).

Six stylized facts summarize the development challenges presented by global deficits in basic skills:

1. At least two-thirds of the world’s youth do not obtain basic skills.

2. The share of young people who do not reach basic skills exceeds half in 101 countries and rises above 90 percent in 37 of these.

3. Even in high-income countries, a quarter of young people lack basic skills.

4. Skill deficits reach 94 percent in sub-Saharan Africa and 90 percent in south Asia, but they also hit 70 percent in Middle East and North Africa and 66 percent in Latin America.

5. While skill gaps are most apparent for the third of global youth not attending secondary school, fully 62 percent of the world’s secondary school students fail to reach basic skills.

6. Half of the world’s young people live in the 35 countries that do not participate in international testing, resulting in a lack of regular foundational performance information.

They write: “The key to improvement is an unwavering focus on the policy goal: improving student achievement. There is no obvious panacea, and effective policies may differ according to context. But the evidence points to the importance of focusing on incentives related to educational outcomes, which is best achieved through the institutional structures of the school system. Notably, education policies that develop effective accountability systems, promote choice, emphasize teacher quality, and provide direct rewards for good performance offer promise, supported by evidence.”

But what specific strategies might be most useful in addressing these issues in developing countries? Justin Sandefur has edited an e-book for the Center for Global Development, including six essays with comments, entitled Schooling for All: Feasible Strategies to Achieve Universal Education.

Around the world, many countries have achieved a substantial increase in primary school enrollment, but only a very modest increase in secondary school enrollment. In Chapter 3, Lee Crawfurd and Aisha Ali make “The Case for Free Secondary Education.” A lot of their proposal comes down to the basics: more teachers and more schools. In many developing countries, students must pass an entrance examination before attending secondary school, and if they pass the exam, they then need to pay fees.

A common concern–and one of the justifications for charging fees in secondary schools–is that many children from the lowest-income families still face considerable barriers to success in primary school. Thus, free secondary school could tend to be a regressive program, benefiting mainly children from higher-income families. A challenge lurking in the background, then, is how to support children from the lowest-income families in their primary education, so that they are not already far behind as early as third or fourth grade.

Lee Crawfurd and Alexis Le Nestour look at the evidence on “How Much Should
Governments Spend on Teachers?” They conclude that although hiring more teachers will be necessary if schooling is to expand, there isn’t much evidence that higher pay for the existing teachers will make a substantial difference in the performance of the lowest-income children in the early grades. In that spirit, the notion of feeding children in school comes up in several of the essays, and is the focus of “Chapter 2: Feed All the Kids,” by Biniam Bedasso. He writes:

School feeding programs have emerged as one of the most common social policy interventions in a wide range of developing countries over the past few decades. Before the disruptions caused by the COVID-19 pandemic, nearly half the world’s schoolchildren,
about 388 million, received a meal at school every day (WFP 2020). As such, school feeding is regarded as the most ubiquitous instrument of social protection in the world employed by developing and developed countries alike. But school feeding is also a human capital
development tool. The theory of change for school feeding programs is rooted in the synergistic relationship between childhood nutrition, health, and education
underscored in the integrated human capital approach (Schultz 1997). The stock of human capital acquired as an adult—a key determinant of productivity— is supposed to be a function of the individual and interactive effects of schooling, nutrition, health,
and mobility. … A survey of government officials in 41 Sub-Saharan Africa countries conducted by the Food and Agriculture Organization of the United Nations (FAO) in 2016 shows that a great majority of the countries implement school feeding programs to help achieve education objectives. …

A review of 11 experimental and quasi-experimental studies from low- and middle-income countries reveals that school feeding contributes to better learning outcomes at the same time as it keeps vulnerable children in school and improves gender equity in education. Although school feeding might appear cost-ineffective compared with specialized education or social protection interventions, the economies of scope it generates are likely to make it a worthwhile investment particularly in food-insecure areas.

In short, Bedasso argues that feeding children at school in developing countries probably pays off just in terms of educational outcomes. But even if the payoff just in terms of education isn’t exceptionally high, the payoff to improved child nutrition in general takes many forms, including improved health and gender equity.

The case for feeding children at school seems strong to me. But it’s only one part of addressing the problem. Many developing countries have dramatically increased primary school enrollments, but they are not yet assuring that most children can keep up and actually achieve basic literacy and numeracy at the primary school level.

A related problem is that the money for this broader agenda is lacking. Jack Rossiter contributes Chapter 6, which is titled “Finance: Ambition Meets Reality.” He looks at the costs of universal primary and secondary schooling, along with school meals, and concludes that the cost would be about $1.9 trillion for low- and middle-income countries in 2030. However, projected education spending for these countries falls about $750 billion short of that. Outside foreign aid might conceivably fill $50 billion of the gap, but probably not more. Rossiter makes the grim case:

Even if international financing comes in line to meet targets, governments are not going to have anything like the sums that costing exercises require. We can choose to ignore this shortfall, stick with plans, and watch costs creep up. Or we can see it as a serious budget constraint, redirect our attention toward finding ways to push costs down, and try hard to get close to universal access in the next decade.

It’s of course tempting to elide these tradeoffs. Maybe if we just root out waste, fraud, and abuse, we will have all the funds we need? Doubtful. As Rossiter points out, universal access by 2030 may require scaling back on the nice-to-haves–like smaller class sizes, higher pay for teachers, new classroom materials, and so on–and being quite hard-headed about the must-haves.
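A quick back-of-the-envelope, using the figures as quoted above (all in billions of dollars), shows how little even optimistic foreign aid would close the gap:

```python
# Back-of-the-envelope check of Rossiter's financing gap, using the
# figures quoted above (all in billions of US dollars).
cost_2030 = 1900          # estimated cost of universal schooling plus meals in 2030
shortfall = 750           # amount by which projected spending falls short
aid_ceiling = 50          # optimistic upper bound on outside foreign aid

projected_spending = cost_2030 - shortfall
remaining_gap = shortfall - aid_ceiling

print(f"projected spending: ${projected_spending} billion")  # $1150 billion
print(f"gap left after aid: ${remaining_gap} billion")       # $700 billion
```

Even on these generous assumptions, roughly $700 billion per year of the ambition is unfunded, which is why Rossiter pushes toward cutting costs rather than hoping the money appears.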

Was Primitive Communism Ever Real?

There’s an image many of us carry around in our minds, an image of a primitive time when small groups of people lived together and shared equally. Manvir Singh describes the evidence in “Primitive communism: Marx’s idea that societies were naturally egalitarian and communal before farming is widely influential and quite wrong” (Aeon, April 19, 2022). He writes:

The idea goes like this. Once upon a time, private property was unknown. Food went to those in need. Everyone was cared for. Then agriculture arose and, with it, ownership over land, labour and wild resources. The organic community splintered under the weight of competition. The story predates Marx and Engels. The patron saint of capitalism, Adam Smith, proposed something similar, as did the 19th-century American anthropologist Lewis Henry Morgan. Even ancient Buddhist texts described a pre-state society free of property. … Today, many writers and academics still treat primitive communism as a historical fact. …

Primitive communism is appealing. It endorses an Edenic image of humanity, one in which modernity has corrupted our natural goodness. But this is precisely why we should question it. If a century and a half of research on humanity has taught us anything, it is to be sceptical of the seductive. From race science to the noble savage, the history of anthropology is cluttered with the corpses of convenient stories, of narratives that misrepresent human diversity to advance ideological aims. Is primitive communism any different?

So that you are not kept in suspense, gentle reader, the evidence in favor of primitive communism is at best highly mixed. In some primitive tribes, perhaps especially the Aché hunter-gatherers who lived in what is now Paraguay, food was shared very equally. But the Aché appear to be at one end of the spectrum. In many other hunter-gatherer tribes, those who succeeded at hunting or gathering could distribute the product of their labor as they personally saw fit. In addition, even among the Aché, as well as every other hunter-gatherer tribe, there were a number of possessions held as private property. Singh writes:

[H]unter-gatherers are diverse. Most have been less communistic than the Aché. When we survey forager societies, for instance, we find that hunters in many communities enjoyed special rights. They kept trophies. They consumed organs and marrow before sharing. They received the tastiest parts and exclusive rights to a killed animal’s offspring. The most important privilege hunters enjoyed was selecting who gets meat. … Compared with the Aché, many mobile, band-living foragers lay closer to the private end of the property continuum. Agta hunters in the Philippines set aside meat to trade with farmers. Meat brought in by a solitary Efe hunter in Central Africa was ‘entirely his to allocate’. And among the Sirionó, an Amazonian people who speak a language closely related to the Aché, people could do little about food-hoarding ‘except to go out and look for their own’. …

All hunter-gatherers had private property, even the Aché. Individual Aché owned bows, arrows, axes and cooking implements. Women owned the fruit they collected. Even meat became private property as it was handed out. … Some proponents of primitive communism concede that foragers owned small trinkets but insist they didn’t own wild resources. But this too is mistaken. Shoshone families owned eagle nests. Bearlake Athabaskans owned beaver dens and fishing sites. Especially common is the ownership of trees. When an Andaman Islander man stumbled upon a tree suitable for making canoes, he told his group mates about it. From then, it was his and his alone. Similar rules existed among the Deg Hit’an of Alaska, the Northern Paiute of the Great Basin, and the Enlhet of the arid Paraguayan plains. In fact, by one economist’s estimate, more than 70 per cent of hunter-gatherer societies recognised private ownership over land or trees.

Singh describes how primitive societies often had severe punishments for those who transgressed the applicable property rights. In addition, the social bonds that led to extreme sharing among the Aché had some horrific consequences. When you engaged in extreme sharing, it was based on the idea that in the not-too-distant future you would also be the recipient of extreme sharing by others. But what about those who seemed unlikely to be contributors to future sharing? In Aché society, widows, the sick or disabled, and orphans were likely to be killed: “The Aché had among the highest infanticide and child homicide rates ever reported. Of children born in the forest, 14 per cent of boys and 23 per cent of girls were killed before the age of 10, nearly all of them orphans. An infant who lost their mother during the first year of life was always killed.”

We can debate why the vision of a pre-industrial sharing society has such a sentimental pull. But as a matter of fact, it seems to be a wildly oversimplified story. Singh writes: “For anyone hoping to critique existing institutions, primitive communism conveniently casts modern society as a perversion of a more prosocial human nature. Yet this storytelling is counterproductive. By drawing a contrast between an angelic past and our greedy present, primitive communism blinds us to the true determinants of trust, freedom and equity.”

I will leave my definitive discussion of “the true determinants of trust, freedom and equity” for another day. But in that discussion, the idea that human beings have a pure, primitive, natural, inclination toward trust, sharing, and mutual respect will not play a major role.

Hat tip: I ran across a mention of this article in a post by Alex Tabarrok at the remarkably useful Marginal Revolution website.

The Digital Currency Ecosystem and Blockchain

Here’s one intuitive way to think about how cryptocurrencies like Bitcoin work:

For cryptocurrencies, this database is called the blockchain. One can loosely think of the blockchain as a ledger of money accounts, in which each account is associated with a unique address. These money accounts are like post office boxes with windows that permit anyone visiting the post office to view the money balances contained in every account. These windows are perfectly secured. While anyone can look in, no one can access the money without the correct password. This password is created automatically when the account is opened and known only by the person who created the account (unless it is voluntarily or accidentally disclosed to others). The person’s account name is pseudonymous (unless voluntarily disclosed). These latter two properties imply that cryptocurrencies (and cryptoassets more generally) are digital bearer instruments. That is, ownership control is defined by possession (in this case, of the private password). …

As with physical cash, no permission is needed to acquire and spend cryptoassets. Nor is it required to disclose any personal information when opening an account. Anyone with access to the internet can download a cryptocurrency wallet—software that is used to communicate with the system’s miners (the aforementioned volunteer accountants). The wallet software simultaneously generates a public address (the “location” of an account) and a private key (password). Once this is done, the front-end experience for consumers to initiate payment requests and manage money balances is very similar to online banking as it exists today. Of course, if a private key is lost or stolen, there is no customer service department to call and no way to recover one’s money.

David Andolfatto and Fernando M. Martin offer this description in “The Blockchain Revolution: Decoding Digital Currencies,” which appears in the 2021 Annual Report of the St. Louis Federal Reserve.

The terminology of a “bearer” instrument may be unfamiliar to some readers, but it’s straightforward. Cash is a “bearer” instrument: the person with the cash can spend it. Back in the late 19th and early 20th centuries there were financial instruments called “bearer bonds,” immortalized in many a mystery/adventure novel: whoever walked into a bank holding the physical piece of paper that represented the bond could deposit it without any other proof of ownership. Similarly, if an outside player can access the digital record of a cryptocurrency, it can simply take the assets.
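To make the bearer-instrument logic concrete, here is a minimal sketch of how a wallet pairs a secret key with a shareable address. This is a deliberate simplification of my own: real Bitcoin wallets use secp256k1 elliptic-curve key pairs and Base58Check address encoding, not a bare SHA-256 hash of the private key.

```python
import hashlib
import secrets

def make_wallet():
    """Toy wallet: a random private key and a hash-derived address.

    Real cryptocurrencies derive the address from an elliptic-curve
    public key; this sketch collapses those steps into a single hash.
    """
    private_key = secrets.token_bytes(32)                    # known only to the owner
    address = hashlib.sha256(private_key).hexdigest()[:40]   # pseudonymous, safe to publish
    return private_key, address

key, address = make_wallet()
# Anyone can view the balance at `address`; only whoever holds `key`
# can spend it -- ownership is defined by possession of the key.
```

The point of the sketch is the asymmetry: the address can be posted publicly, like the window of the post-office box in the quoted analogy, while spending requires the key, which cannot be recovered if lost.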

The concern is more than hypothetical. The Wall Street Journal ran a story late last week called “Crypto Thieves Get Bolder by the Heist, Stealing Record Amounts.” The story points out that in the last 38 weeks, there have been 37 major hacks at cryptocurrency/blockchain organizations. Last year the losses were about $3.2 billion; this year’s total may be larger. The most recent large hack was at a “stablecoin” currency called Beanstalk, essentially emptying it of digital assets valued at $182 million.

In the St. Louis Fed report, Andolfatto and Martin provide a readable overview of the blockchain-related ecosystems: cryptocurrency, stablecoins, central bank digital currency, decentralized finance, and more. Their tone is resolutely balanced, neither advocating nor attacking these developments, but simply pointing out how they work and the tradeoffs involved. They write:

[P]ossibly the most attractive characteristic of Bitcoin is that it operates independently of any government or concentration of power. Bitcoin is a decentralized autonomous organization (DAO). Its laws and regulations exist as open-source computer code living on potentially millions of computers. The blockchain is beyond the (direct) reach of government interference or regulation. There is no physical location for Bitcoin. It is not a registered business. There is no CEO. Bitcoin has no (conventional) employees. The protocol produces a digital asset, the supply of which is, by design, capped at 21 million BTC. Participation is voluntary and permissionless. Large-value payments can be made across accounts quickly and cheaply. It is not too difficult to imagine how these properties can be attractive to many people.

There are some obvious practical applications of such currency. If you want to make an international payment, for example, setting up a cryptocurrency account and sharing the passwords with someone at the other end may have lower fees than the conventional ways of wiring money or transferring between international banks. The most prominent example is El Salvador, which has been using the US dollar as its official currency, but recently announced that it would also use Bitcoin as an official currency. This policy experiment–two official currencies whose values can fluctuate relative to each other–has some obvious built-in tensions.

But while blockchain-related transactions can certainly fulfill useful purposes in some cases, it feels to me as if most of the energy around cryptocurrency is driven by two factors: the allure of anonymity and thus being able to sidestep existing regulations, and the allure of making money if cryptocurrencies rise in value relative to the US dollar. They write:

Much of the excitement associated with cryptocurrencies seems to stem from the prospect of making money through capital gains via currency appreciation relative to the U.S. dollar (USD). … To be sure, the price of a financial security can be related to its underlying fundamentals. It is not, however, entirely clear what these fundamentals are for cryptocurrency or how they might generate continued capital gains for investors beyond the initial rapid adoption phase. Moreover, while the supply of a given cryptocurrency such as Bitcoin may be capped, the supply of close substitutes (from the perspective of investors, not users) is potentially infinite. Thus, while the total market capitalization of cryptocurrencies may continue to grow, this growth may come more from newly created cryptocurrencies and not from growth in the per-unit price of any given cryptocurrency, such as Bitcoin.

Are US Fertility Rates Starting to Decline?

In the big-picture long-run sense, US fertility rates haven’t changed much in recent decades. Here’s a figure from Anne Morse of the US Census Bureau in “Stable Fertility Rates 1990-2019 Mask Distinct Variations by Age” (April 6, 2022).

The black line shows total births in the US; the red line shows the fertility rate. For the fertility rate, you can see the plunge in US fertility rates early in the 20th century, the low fertility rates of the Great Depression in the 1930s, the jump in fertility rates in the post-World War II “baby boom,” and the following decline in fertility rates. As the figure shows, US fertility rates have been fairly stable since the 1970s, albeit with what looks like a small drop after 2008 or so and the experience of the Great Recession.

The patterns of fertility by age are also shifting. In this figure, the blue line shows patterns of fertility by age in 1990, and then each line shows the pattern in a following year up to the orange line for fertility in 2019. Birthrates for those under age 19 have been falling fairly sharply; however, birth rates for those in their late 20s and older are higher in 2019 than they were in 1990 (which for this topic really isn’t all that long ago).

But although US fertility rates have by no means fallen off a cliff, are we seeing the beginning of a decline? Melissa S. Kearney, Phillip B. Levine, and Luke Pardue consider this topic in “The Puzzle of Falling US Birth Rates since the Great Recession,” in the Winter 2022 issue of the Journal of Economic Perspectives. (Full disclosure: I’m Managing Editor of this journal. All articles in the JEP, back to the first issue, are freely available online compliments of the American Economic Association.) The authors focus in particular on birth rates from the mini-peak in 2007 up through 2020.

Part of what’s happening here is that birthrates tend to decline in recessions (as can also be seen in the longer-term figure above). But that explains less than half of the decline. The drop in 2020 is mostly not pandemic-related–after all, most of the children born in 2020 were conceived before the pandemic started. But pandemics also tend to reduce birthrates (as shown in the top figure with regard to the influenza epidemic of 1919), so birthrates are likely to drop lower in 2021 and this year.

One way to get some insight into the patterns here is to divide up the groups that contributed to the decline in US birth rates by age, education, and ethnicity. When Kearney, Levine, and Pardue do this, here’s what they find:

Overall, three-quarters of the fall in birth rates can be attributed to these eight groups. Three of the four biggest drops are for those in the age 15-19 bracket–which should probably be considered as overall good news. Also, four of the eight biggest drops are for white non-Hispanics of various education levels.

One factor here seems to be that birth rates for Hispanic women rose in the early 2000s and then fell. Morse of the Census Bureau puts it this way: “While fertility rates broadly declined in the United States from 1990-2019, there was a mini baby boom in the early 2000s. This increase was driven by foreign-born Hispanic women. This mini baby boom to foreign-born Hispanics ended in 2007, just before the Great Recession began later that year. … It is not clear what portion of the fertility decline to foreign-born Hispanics can be attributed to the economic downturn since the decline began before the Great Recession started. This decline may partially be due to the end of the mini baby boom for foreign-born Hispanic women and a return to long-term downward fertility trend.”

Similarly, in looking across US states, Kearney, Levine, and Pardue find that the states with the biggest drops in birthrates tend to be in the southwest, where Hispanic populations are generally higher.

Kearney, Levine, and Pardue argue that perhaps the most plausible overall explanation for the fall in birthrates is also the simplest: younger cohorts of women are growing up with a different idea of what they want from life.

It is unlikely that career aspirations or parenting norms changed exactly in or around 2007. Note, though, that women who grew up in the 1990s were the daughters of the 1970s generation and women who grew up in the 1970s and 1980s were daughters of the 1950s and 1960s generation. It seems plausible that these more recent cohorts of women were likely to be raised with stronger expectations of having life pursuits outside their roles as wives and mothers. It also seems likely
that the cohorts of young adults who grew up primarily in the 1990s or later—and
reached prime childbearing years around and post 2007—experienced more intensive
parenting from their own parents than those who grew up primarily in the 1970s and 1980s. They would have a different idea about what parenting involves. We speculate that these differences in formed aspirations and childhood experiences could potentially explain why more recent cohorts of young women are having fewer children than previous cohorts.

However, it’s also interesting that US women are having fewer children than they say, in survey data, that they would ideally desire.

Some nationally representative surveys ask women about their expectations or desires for childbearing. On this point, the number of children that women report wanting to have has been dropping slightly. Hartnett and Gemmill (2020) report that data from the 2006–2017 National Survey of Family Growth shows that the total number of children women intend to have declined (from 2.26 in 2006–2010 to 2.16 children in 2013–2017) and that the proportion of women intending to remain childless increased slightly. Women also tend to end up having fewer children than they say would be ideal and that gap has been growing (Stone 2021). One interpretation of this discrepancy is that it offers prima facie evidence that constraints or costs are playing a role in depressing birth rates. An alternative interpretation is that women report they want, say, two or three children, but when faced with actual trade-offs associated with having more children, they choose differently.

The Current Puzzle of Labor Markets

There’s a puzzle in US labor markets, as well as in labor markets in a number of other countries. On one side, the total number of US jobs is still lower than before the pandemic. Total US employment was 152.5 million in February 2020, but 150.9 million in March 2022.

But even though total jobs are down, the unemployment rate is back down to where it was before the pandemic: 3.5% in February 2020, and 3.6% in March 2022.

How can an economy have 1.6 million fewer jobs but (basically) the same unemployment rate? The answer is that the unemployment rate only counts people who are looking for jobs. If you are not looking for a job, then you are “out of the labor force”–and treated in the statistics like a retiree or a parent who is choosing to be home with children, not like someone who wants a job. Thus, 63.4% of adults were in the labor force (that is, employed or unemployed and looking for work) in February 2020, while only 62.4% were in the labor force in March 2022.
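The arithmetic behind these definitions can be checked directly. In the sketch below, the employment totals are from the text, but the unemployment counts and adult-population figures are my own back-solved approximations chosen to reproduce the quoted rates, not official BLS series:

```python
def labor_stats(employed_m, unemployed_m, adult_pop_m):
    """Unemployment and participation rates from counts in millions."""
    labor_force = employed_m + unemployed_m   # employed plus actively looking
    return {
        "unemployment_rate": 100 * unemployed_m / labor_force,
        "participation_rate": 100 * labor_force / adult_pop_m,
    }

# Employment totals from the text; unemployment and adult-population
# figures are illustrative approximations.
feb_2020 = labor_stats(employed_m=152.5, unemployed_m=5.5, adult_pop_m=249.2)
mar_2022 = labor_stats(employed_m=150.9, unemployed_m=5.6, adult_pop_m=250.8)

# Fewer jobs, nearly the same unemployment rate: the difference shows
# up in participation (roughly 63.4% versus 62.4%).
```

People who stop looking for work drop out of both the numerator and denominator of the unemployment rate, which is why the two snapshots can show similar unemployment rates despite 1.6 million fewer jobs.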

So here’s the puzzle: Is the labor market “tight,” with relatively few workers looking for job openings, as shown by the low unemployment rate? Or is the labor market “loose,” in the sense that the total number of jobs is still below the peak and at least some of the workers now out of the labor force might re-enter if given the opportunity?

The most recent World Economic Outlook for April 2022 just released by the IMF has some evidence here, suggesting four possible hypotheses for the current labor market:

(1) labor market mismatch—discrepancies between the types of vacant positions and the skills of job seekers; (2) health-related concerns, which may be a strong driver of the withdrawal of older workers from the workforce; (3) changing job preferences among workers, which may account in part for historically high quit rates—a phenomenon sometimes called the “Great Resignation”; and (4) school and childcare center disruptions leading mothers of young children to exit the labor force—the “She-cession.”

The labor market mismatch refers to people with experience in one sector of the economy who then have trouble shifting to another area. Early in the pandemic this was surely a major factor. But the IMF estimates that “[a]s of the third quarter of 2021, labor market mismatch accounted for at most one-fifth of the shortfall in the employment rate …” A drop in the number of older workers, probably partly out of health concerns, can account for about one-third of the drop in employment. The US Bureau of Labor Statistics doesn’t produce a regular employment estimate for mothers of young children, but there is at least partial evidence suggesting this is a factor, too.

The shift in job preferences seems like a major factor. Here’s how the IMF describes it:

Rates of voluntary job quits have reached historic highs in both countries [the United States and the United Kingdom]. There is tentative evidence that, beyond seizing new opportunities to move up the job ladder in tight labor markets, workers’ preferences may have partly shifted toward jobs that bring not only higher pay but also greater safety and flexibility. In particular, several industries in which job quit rates have risen the most involve a disproportionate share of contact-intensive, physically strenuous, less flexible, and low-paying jobs, such as in accommodation and food services and retail trade.

Rising labor market tightness has spurred faster nominal wage growth, particularly for low-paying jobs. Since the start of the pandemic, the increase in tightness alone is estimated to have directly increased overall nominal UK and US wage inflation by approximately 1.5 percentage points. In low-pay industries, this impact has been much greater, reflecting both above-average increases in labor market tightness and a stronger historical link between tightness and wage growth in these industries. So far, overall implications of increased tightness for wage inflation have been muted, partly because low-wage workers account for a relatively small share of firms’ total labor costs …

The World Economic Outlook report draws upon an IMF Staff Discussion Note, “Labor Market Tightness in Advanced Economies,” by Romain A Duval, Yi Ji, Longji Li, Myrto Oikonomou, Carlo Pizzinelli, Ippei Shibata, Alessandra Sozzi, and Marina M. Tavares (March 2022). That report offers more detail across countries. Here are a couple of its conclusions:

Most labor markets are tighter than they were prior to COVID-19. These include English-speaking (Australia, Canada, United Kingdom, United States) and several northern and western continental European economies, while Germany and Japan still show lower vacancy-to-unemployment ratios than in 2019. Vacancies have risen steadily across all sectors, including those with more contact-intensive, less teleworkable, and/or lower-skilled jobs that were hit hard by the pandemic. Fears that COVID-19 might permanently destroy these jobs, including through automation, have not materialized so far.

Tight labor markets partly reflect reduced labor force participation, which has shrunk the pool of available job seekers. The main reason why employment remains subdued, particularly compared to precrisis trends, is that disadvantaged groups—including, depending on countries, the low-skilled, older workers, or women with young children—have yet to fully return to the labor market. Looking through cross-country heterogeneity, in the median country, low-skilled workers—about one-fourth of whom are older workers—account for more than two-thirds of the aggregate employment gap vis-à-vis its pre-COVID-19 trend, while older workers as a group contribute about one-third of the gap. In some cases, the decline in immigration also seems to have amplified labor shortages among low-skilled jobs.

Some Economics of Supply Chains

Here’s an example of a global supply chain, which is all the more powerful for its ordinariness. It’s taken from the most recent Economic Report of the President, produced each year by the White House Council of Economic Advisers. Each year, the report is an overview of the economy from the viewpoint of academic economists who are admittedly and overtly sympathetic to the president in power, but also who feel a professional compulsion to provide facts and figures and tables to back up their beliefs. Chapter 6 of this year’s report is entitled “Building Resilient Supply Chains.” The CEA writes:

The M9 hot tub is made by Bullfrog Spas in Utah, where 500 workers assemble almost 1,850 parts from 7 countries and 14 states (see figure 6-i). The hot tub top shell starts as a flat acrylic sheet from Kentucky, which is then combined with a different type of plastic in Nevada and sprayed with an industrial chemical from Georgia. Parts of the frame shell of the hot tub are driven in by trucks from Idaho several times a week. Many of the electric motors come from China and are assembled into water pumps in Mexico and then driven to Utah. Additional material for exterior cabinets is transported from Shanghai on container ships through the ports of Long Beach or Oakland. Water-spraying jets are made in Guangzhou, China; are sent through the Panama Canal and Eastern ports to the supplier’s warehouse in Cleveland, Tennessee; and then are sent on to Utah. Once fully assembled, the finished hot tubs are placed on trucks or trains and delivered to retailer warehouses. This example illustrates both the extent of outsourcing, which increases the number of individual companies involved in the production of a single good, and the geographic distance traveled by each component, estimated to total nearly 900,000 miles, as well as the dependence on transportation and logistics this entails.

On one side, it seems pretty clear that Bullfrog Spas would not be providing jobs and payroll in Herriman, Utah, without some kind of far-flung supply chain. The great benefit of supply chains is the ability to combine specialized inputs from all over the world. On the other side, any company with far-flung supply chains will be vulnerable to disruptions of those supply chains.

The CEA report emphasizes two main reasons for the growth of global supply chains over the last three decades or so.

The first change is increased access to foreign suppliers, making offshoring more cost-effective for firms, largely due to advances in information technology (IT) and reductions in trade barriers since the 1990s. Advances in IT allow firms to convey detailed information about product and process specifications across long distances, while improvements in transportation, such as containerization, allow goods to be moved more quickly and consistently … The second key change is the growing role of financial criteria and institutions in corporate decisionmaking. This “financialization” of the economy has encouraged outsourcing and offshoring because of savings in costs that are easily measurable. … Such incentives have encouraged managers to focus more on these financial statement numbers than on less easily measurable metrics, such as resilience. … This financialization of the economy has been an important driver of U.S. lead firms’ supply chain strategies. Outsourcing of production and other capital-intensive activities is prescribed by consulting firms promoting an “asset-light” strategy. These firms note that, all else held equal, a lower amount of capital makes a given amount of revenue yield a higher measured return on assets …
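The return-on-assets incentive behind the “asset-light” strategy is simple arithmetic. The numbers below are hypothetical, mine rather than the report's:

```python
def return_on_assets(net_income_m, total_assets_m):
    """Return on assets in percent; inputs in millions of dollars."""
    return 100 * net_income_m / total_assets_m

# Two hypothetical firms earning the same $50 million of income:
in_house = return_on_assets(net_income_m=50, total_assets_m=1000)    # owns its factories
asset_light = return_on_assets(net_income_m=50, total_assets_m=250)  # production outsourced

# Identical income, but the asset-light balance sheet reports a
# fourfold higher measured ROA (5% versus 20%), even though nothing
# about resilience or underlying profitability has improved.
```

This is the sense in which financial metrics can favor outsourcing: shedding capital-intensive production flatters the ratio without changing what the firm actually earns.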

As global supply chains have wobbled in the COVID pandemic, the potential tradeoffs have become more clear. The most obvious tradeoff is the potential for gains from a smoothly functioning supply chain vs. the risks and costs of a disrupted supply chain. But the CEA report mentions a number of more subtle tradeoffs, which often depend on the exact nature of the supply chain.

For example, the cost of building a cutting-edge semiconductor fabrication plant now exceeds $12 billion, and so most users of computer chips buy rather than make their own. But is the best choice to rely on very standardized chips, which can be purchased from several manufacturers, or to work closely with just one or two fabs to design customized chips–and to reserve production space for those chips?

The use of semiconductors in the auto industry illustrates this point. Although semiconductors became key to the operation of modern vehicles more than a decade ago, many automakers did not begin to communicate directly with semiconductor manufacturers until late 2021. Rather, they bought chips indirectly, through distributors or first-tier suppliers, and did not commit to purchases more than a few weeks out. Thus, although their product plans included more intensive use of semiconductors in future vehicles, automakers had not been credibly signaling this intention to manufacturers. Without this commitment, semiconductor manufacturers were unwilling to build new fabs for automotive-grade chips, since fabs must maintain very high capacity utilization to be profitable. Further, they did not devote resources to innovating on the dimensions important to automakers, such as reduced cost and increased reliability. In contrast, Apple has long paid to reserve capacity in advance at fabs, and has worked with semiconductor manufacturers and design firms to innovate on the dimensions important to them—speed and power …

Another tradeoff related to innovation is that getting chips from a long distance away gives a user access to cutting-edge technologies. However, there is also a belief that when engineers from chip-makers and chip-users are geographically closer, both sides may benefit:

However, there is evidence suggesting that geographically separating production and innovation impedes innovation. Engineers overseeing production are exposed to the capabilities and problems of existing technology, helping them to generate new ideas both for improving processes and for applying a given technology to new markets. Losing this exposure reduces the opportunity to generate such innovative ideas. For example, when production of consumer electronics migrated to Asia in the 1980s, the United States lost the potential to later compete in the burgeoning market for follow-on products like flat-panel displays, LED lighting, and advanced batteries …

Yet another tradeoff of supply chains is that a lead firm, purchasing inputs from others, may buy from a supplier firm that offers considerably lower wages and benefits, or perhaps also worse working conditions. Especially if the supplier is another US firm, this ability to contract out jobs to a separate workforce under a different employer seems like a mixed blessing for the US workforce viewed as a whole.

There are of course ways to reduce vulnerability to short-term supply chain disruptions: holding larger inventories, making sure you have a geographically distributed range of suppliers for key inputs, planning ahead for shifting to other inputs if needed, and so on. The problem is that these solutions have costs of their own–and at some point, the costs of avoiding vulnerability to a disruption mean that the extended supply chain isn’t worth it in the first place. But many firms that rely on long supply chains had, frankly, not given a lot of thought to their vulnerability before the pandemic.

Government can sometimes play a role, too. One vicious circle in supply chain disruptions is that all the buyers are trying to build up their own inventories–which of course makes the shortage worse. In some cases, the government can help alleviate this hoarding by providing shared information about the market.

For instance, the U.S. Department of Health and Human Services has taken on an important role in providing an accurate demand signal for PPE. The department’s Supply Chain Control Tower receives near-daily data from distributors that represent more than 80 percent of the volume for the commodities it is tracking, along with supply status from 5,000 hospitals. This dashboard alleviates hospitals’ fear of shortages, so they do not need to incur extra costs of holding inventory. The dashboard also allows distributors to receive a truer demand signal by reducing excessive ordering that exacerbates supply constraints (U.S. Department of Health and Human Services 2022, 13). In cases such as these, the public sector is well positioned to collect, aggregate, and disseminate this information.

Another role for government is in setting the conditions for investment in infrastructure–for example, to reduce the risk that US ports become clogged. In a different chapter, the CEA notes:

There is much evidence that the United States lags far behind its competitors in supplying the essential inputs to economic capacity. U.S. infrastructure provides several examples. The World Economic Forum’s Global Competitiveness Report found in 2019 that, out of 141 countries, the United States ranked 13th in quality of overall infrastructure, 17th in quality of road infrastructure, 23rd in electricity supply quality, and 30th in reliability of water supply (Schwab 2019). A separate ranking of global ports by the World Bank and IHS Markit found that no U.S. port made it into the top 50 globally, and just 4 are in the top 100. By comparison, of the top 10 ports, several are in China. The Federal Communications Commission (FCC 2018) has also ranked the United States 10th among developed countries for broadband speed and connectivity. In transporting goods and services, in connecting workers around the country and globe, in transforming technological progress into productivity gains, the United States is not at the frontier.

Supply chains offer enormous economic benefits. But one hopes that a lasting economic lesson from the pandemic, for both private firms and government actors, is to think more seriously about the risks and vulnerabilities of supply chains.

The Case for Doubling US R&D Spending

“We massively underinvest in science and innovation, with implications for our standards of living, health, national competitiveness, and capacity to respond to crisis.” So argues Benjamin F. Jones in “Science and Innovation: The Under-Fueled Engine of Prosperity,” one of eight essays appearing in an e-book on Rebuilding the Post-Pandemic Economy (Aspen Economic Strategy Group, November 2021). Jones offers some vivid reminders that increases in the standard of living–not just purely economic, but also other aspects like health–are closely related to investments in new technologies.

Real income per-capita in the United States is 18 times larger today than it was in 1870 (Jones 2016). These gains follow from massive increases in productivity. For example, U.S. corn farmers produce 12 times the farm output per hour since just 1950 (Fuglie et al. 2007; USDA 2020). Better biology (seeds, genetic engineering), chemistry (fertilizers, pesticides), and machinery (tractors, combine harvesters) have revolutionized agricultural productivity (Alston and Pardey 2021), to the point that in 2018 a single combine harvester, operating on a farm in Illinois, harvested 3.5 million pounds of corn in just 12 hours (CLASS, n.d.). In 1850, it took five months in a covered wagon to travel west from Missouri to Oregon and California, but today it can be done in five hours—traveling seven miles up in the sky. Today, people carry smartphones that are computationally more powerful than a 1980s-era Cray II supercomputer, allowing an array of previously hard-to-imagine things—such as conducting a video call with distant family members while riding in the back of a car that was hailed using GPS satellites overhead.

Improvements in health are also striking: Life expectancy has increased by 35 years since the late 19th century, when about one in five children born did not reach their first birthday (Murphy and Topel 2000). Back then, typhoid, cholera, and other diseases ran rampant, Louis Pasteur had just formulated the germ theory of disease, which struggled to gain acceptance, and antibiotics did not exist. In the 1880s, even for those who managed to reach age 10, U.S. life expectancy was just age 48 (Costa 2015). Overall, when examining health and longevity, real income, or the rising productivity in agriculture, transportation, manufacturing, and other sectors of the economy, the central roles of scientific and technological progress are readily apparent and repeatedly affirmed (Mokyr 1990; Solow 1956; Cutler et al. 2006; Alston and Pardey 2021; Waldfogel 2021).

Jones emphasizes some other gains from technology as well. For example, technology can offer flexibility in confronting various threats. Without decades of earlier research, COVID vaccines could not have been developed in less than a year after the pandemic hit: “Whether facing a pandemic, climate change, cybersecurity threats, outright conflict, or other challenges, a robust capacity to innovate—and to do so quickly—appears central to national security and national resilience.”

Moreover, it’s worth remembering that many countries in the rest of the world have active research and development efforts in many areas. The technology frontier is a moving target. The US will either stay near the lead in many of these areas or fall behind.

In the mid-1990s, the United States was in the top five of countries globally in both total R&D expenditure as a share of GDP and public R&D expenditure as a share of GDP (Hourihan 2020). Today, the United States ranks 10th and 14th in these metrics, and U.S. public expenditure on R&D as a share of GDP is now at the lowest level in nearly 70 years. … By contrast, China has massively increased its science and innovation investments in pursuit of leading the world economically and strengthening its hand in global affairs. China’s R&D expenditure has grown 16% annually since the year 2000, compared to 3% annually in the United States. If China implements its current five-year plan, it will soon exceed the United States in total R&D expenditure.
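The power of those growth-rate differentials comes from compounding. Here's a back-of-the-envelope sketch in Python of how fast a gap closes when one country's R&D spending grows 16% per year and the other's grows 3%; the assumption that China started at one-tenth the U.S. level is a hypothetical round number for illustration, not a figure from the essay.

```python
import math

# Growth rates quoted by Jones: China's R&D spending grew ~16% annually
# since 2000, versus ~3% annually for the United States.
china_growth, us_growth = 0.16, 0.03

# The spending ratio shrinks by this factor each year.
relative_growth = (1 + china_growth) / (1 + us_growth)

# Illustrative only: suppose China started at one-tenth of the U.S.
# level in 2000 (a hypothetical round number, not from the essay).
initial_ratio = 0.10

# Years until the two spending levels are equal at these constant rates.
years_to_parity = math.log(1 / initial_ratio) / math.log(relative_growth)
print(f"relative growth factor: {relative_growth:.4f} per year")
print(f"years to close a 10x gap: {years_to_parity:.1f}")
```

At these rates, even a tenfold initial gap closes in roughly two decades, which is why "will soon exceed" is an unsurprising projection.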

Jones’s essay reviews the argument, fairly standard among economists, that a pure free market will tend to underinvest in new technologies, because in a pure free market the innovator will not capture the full value of an innovation. Indeed, if firms face a situation where unsuccessful attempts at innovation just lose money, while successful innovations are readily copied by others, or the underlying ideas of the innovation lead to related breakthroughs for others, then the incentives to innovate can become rather thin. This is the economic rationale for government policies to support research and development: direct support of basic research (where the commercial applications can be quite unclear), protection of intellectual property like patents and trade secrets, tax breaks for companies that spend money on R&D, and so on.

A key insight is that many innovations build on other insights in unexpected ways. Here are a couple of vivid examples from Jones: the link from Albert Einstein to Uber, and the link from life in hot springs to genetic science.

Uber is a novel business model that has disrupted the transportation sector, and to the user Uber might appear as a simple mobile app enabling a new business idea. But Uber relies on a string of prior scientific achievements. Among them is GPS technology, embedded in the smartphone and in satellites overhead, which allows the driver and rider to match and meet. The GPS system in turn works by comparing extremely accurate time signals from atomic clocks on the satellites. But because the satellites are moving at high velocity compared to app users and experience less gravity, time is ticking at a different speed on the satellites, according to Einstein’s mind-bending theories of special and general relativity. In practice, the atomic clocks are adjusted according to Einstein’s equations, before the satellite is launched, to account exactly for these relativistic effects. Without these corrections, the system would not work. There is thus a series of intertemporal spillovers from Einstein to the GPS system to the smartphone to Uber (not to mention all the other innovations, mobile applications, and new businesses that rely on GPS technology) …
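The relativistic corrections described in that passage can be verified with a back-of-the-envelope calculation. The sketch below uses standard approximate values for Earth's gravitational parameter and the GPS orbital radius (these constants are my additions, not figures from the essay); it reproduces the well-known result that, uncorrected, GPS satellite clocks would drift by roughly 38 microseconds per day.

```python
import math

# Approximate physical constants and GPS orbital parameters.
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2
c = 2.998e8          # speed of light, m/s
r_earth = 6.371e6    # Earth's mean radius, m
r_sat = 2.656e7      # GPS orbital radius (~20,200 km altitude), m
v_sat = math.sqrt(GM / r_sat)   # circular orbital speed, ~3.9 km/s
day = 86400          # seconds per day

# Special relativity: the fast-moving satellite clock runs SLOW,
# by a fractional rate of roughly v^2 / (2 c^2).
sr_us_per_day = (v_sat**2 / (2 * c**2)) * day * 1e6

# General relativity: weaker gravity at orbital altitude makes the
# satellite clock run FAST, by GM/c^2 * (1/r_earth - 1/r_sat).
gr_us_per_day = (GM / c**2) * (1 / r_earth - 1 / r_sat) * day * 1e6

net = gr_us_per_day - sr_us_per_day
print(f"SR slowdown: {sr_us_per_day:.1f} microseconds/day")
print(f"GR speedup:  {gr_us_per_day:.1f} microseconds/day")
print(f"net drift:   {net:.1f} microseconds/day")
```

A drift of ~38 microseconds per day corresponds to a positioning error accumulating at several kilometers per day, which is why the clocks are pre-adjusted before launch, exactly as the quoted passage describes.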

To study DNA, it must first be replicated into measurable quantities, and this replication process depends on many prior scientific advances. One critical if unexpected advance occurred in 1969, when two University of Indiana biologists, Thomas Brock and Hudson Freeze, were exploring hot springs in Yellowstone National Park. Brock and Freeze were asking a simple question: can life exist in such hot environments? They discovered a bacterium that not only survived but thrived—a so-called extremophile organism—which they named Thermus aquaticus. Like Einstein’s work on relativity, this type of scientific inquiry was motivated by a desire for a deeper understanding of nature, and it had no obvious or immediate application. However, in the 1980s, Kary Mullis at the Cetus Corporation was searching for an enzyme that could efficiently replicate human DNA. Such replication faces a deep challenge: it needs to be conducted at high heat, where the DNA unwinds and can be copied, but at high heat replication enzymes do not hold together. Mullis, in a Eureka moment, recalled the story of Thermus aquaticus, knowing that this little bacterium must be able to replicate its DNA at high heat given its environment. And indeed, Thermus aquaticus turned out to provide what was needed. Its replication enzyme was declared by Science Magazine to be the “molecule of the year” in 1989. Mullis would be awarded a Nobel Prize soon after, and the biotechnology industry would boom, opening new chapters of human progress.

When the spin-off effects to other discoveries and inventions are taken into account, the gains to research and development are enormous. What would you have been willing to pay for a COVID vaccine in early 2020? Jones says it this way:

Notably, these social returns are not just good: They are enormous. Effectively, the science and innovation system is akin to having a machine where society can put in $1 and get back $5 or more. If any business or household had such a machine, they would use it all the time. But this machine is society’s. The gains from investment largely accrue to others—not so much to the specific person who puts the dollar into the machine.

Of course, it’s impossible to know in advance exactly what ideas are going to be important, or what firms are going to be success stories. Indeed, one problem with relying on the private sector for R&D is that there is a tendency for firms to focus on the technologies that look most profitable in the short- or medium-terms, rather than building up a broad portfolio of knowledge that can be used in many ways. It’s important to let a thousand flowers blossom–because, if I can mix my metaphors, one of those flowers will grow into a mighty redwood. Jones says it more neatly: “In many ways, the vision of science and innovation needs to be the opposite of ‘picking winners.’ Rather, we need to ‘pick portfolios,’ with an emphasis on both increasing the scale of funding and human capital, and the diversity of approaches that are taken.”

Jones has lots of other points to make about technology and research–for example, although it’s widely believed that innovations are more likely to come from younger researchers, this does not actually seem to be true. But the bottom line is that when economists try to calculate the broad social returns to investing in research and development, it’s common to find estimates of annual returns in the range of 40-50%. He argues that “a sustained doubling of all forms of R&D expenditure in the U.S. economy could raise U.S. productivity and real per-capita income growth rates by an additional 0.5 percentage points per year over a long time horizon.” And of course, these economic gains don’t include the gains to health, or a greater ability to respond in crises, or the benefits of maintaining US global competitiveness.
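An extra 0.5 percentage points of annual growth sounds modest, but compounding makes it large over a long horizon. A quick illustration in Python; the 1.5% baseline per-capita growth rate is an illustrative assumption of mine, not a figure from Jones.

```python
# Jones's estimate: doubling R&D could add ~0.5 percentage points to
# annual per-capita income growth. Small rate differences compound.
baseline = 0.015          # hypothetical baseline growth rate (assumed)
boosted = baseline + 0.005
years = 50

# Ratio of boosted-path income to baseline-path income after 50 years.
gap = ((1 + boosted) / (1 + baseline)) ** years
print(f"after {years} years, income is {100 * (gap - 1):.0f}% higher")
```

After half a century, the boosted path leaves per-capita income roughly a quarter higher than the baseline path, a large payoff from a seemingly small change in the growth rate.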

Jones is also thoughtful in noting that national efforts at research and technology, and at applying those innovations in the broader economy, are not just a matter of budgetary appropriations. It’s necessary to expand the number of researchers and laboratories, which in turn means increasing the pipeline of people with the interests and skills to do that work, which in turn means reaching back to college and high school and elementary school–because someone who, say, leaves fourth grade without being able to do basic arithmetic is likely to have a much harder time becoming a researcher someday. This literature sometimes discusses the problem of “lost Einsteins”–those American children who never got the support and encouragement to develop their underlying abilities in math, science, and innovation.

Another part of the picture–and a faster way to expand US R&D than expanding the pool of students interested in these areas–is to encourage skilled immigration.

In a systematic study of inventors in the United States, Bernstein et al. (2019) examine the role of immigrants in U.S. invention. The central finding is that immigrants are especially productive in inventive activity. Not only do immigrants patent more often than U.S.-born individuals, but their patents are both more impactful for future invention and have greater market value. Overall, immigrants produce twice as many patents as one would expect from their population share. This is consistent more broadly with the STEM orientation of the immigrant workforce. While immigrants make up about 14% of the U.S. workforce, they account for 29% of the college-educated science and engineering workforce and 52% of science and engineering doctorates (Kerr and Kerr 2020). Overall, immigrants have accounted for about 30% of U.S. inventive activity since 1976 (Bernstein et al. 2019).

A similar picture emerges when examining entrepreneurship. Azoulay et al. (2021) study every new venture in the United States founded from 2007 through 2014 and examine whether the founders were born in the United States or abroad. They find that immigrants are 80% more likely to start a company than U.S.-born individuals. Moreover, immigrant founders are more likely to start companies of every size, including the highest-growth and most successful new businesses (see Figure 6). Indeed, looking at Fortune 500 firms today and tracing them back to their founding roots, one similarly finds a disproportionate role of immigrant founders—from Alexander Graham Bell to Sergey Brin to Elon Musk. A remarkable finding here is that immigrant-founded firms employ more people in total than there are immigrants in the U.S. workforce.