Universal Schooling in Developing Countries

There are no examples of countries with generally low levels of educational achievement that are also high-income nations; conversely, there are many examples of countries where the level of educational achievement first increased substantially and was then followed by economic growth. Eric A. Hanushek and Ludger Woessmann describe their recent research on basic skills around the world in “The Basic Skills Gap” (Finance & Development, June 2022).

Six stylized facts summarize the development challenges presented by global deficits in basic skills:

1. At least two-thirds of the world’s youth do not obtain basic skills.

2. The share of young people who do not reach basic skills exceeds half in 101 countries and rises above 90 percent in 37 of these.

3. Even in high-income countries, a quarter of young people lack basic skills.

4. Skill deficits reach 94 percent in sub-Saharan Africa and 90 percent in south Asia, but they also hit 70 percent in the Middle East and North Africa and 66 percent in Latin America.

5. While skill gaps are most apparent for the third of global youth not attending secondary school, fully 62 percent of the world’s secondary school students fail to reach basic skills.

6. Half of the world’s young people live in the 35 countries that do not participate in international testing, resulting in a lack of regular foundational performance information.

They write: “The key to improvement is an unwavering focus on the policy goal: improving student achievement. There is no obvious panacea, and effective policies may differ according to context. But the evidence points to the importance of focusing on incentives related to educational outcomes, which is best achieved through the institutional structures of the school system. Notably, education policies that develop effective accountability systems, promote choice, emphasize teacher quality, and provide direct rewards for good performance offer promise, supported by evidence.”

But what specific strategies might be most useful in addressing these issues in developing countries? Justin Sandefur has edited an e-book for the Center for Global Development, including six essays with comments, entitled Schooling for All: Feasible Strategies to Achieve Universal Education.

Around the world, many countries have achieved a substantial increase in primary school enrollment, but only a very modest increase in secondary school enrollment. In Chapter 3, Lee Crawfurd and Aisha Ali make “The Case for Free Secondary Education.” A lot of their proposal comes down to the basics: more teachers and more schools. In many developing countries, students must pass an entrance examination before attending secondary school, and if they pass the exam, they then need to pay fees.

A common concern–and one of the justifications for charging fees in secondary schools–is that many children from the lowest-income families still face considerable barriers to success in primary school. Free secondary school could thus tend to be a regressive program, benefiting mainly children from higher-income families. A challenge lurking in the background, then, is how to support children from the lowest-income families in their primary education, so they are not already far behind as early as third or fourth grade.

Lee Crawfurd and Alexis Le Nestour look at the evidence on “How Much Should Governments Spend on Teachers?” They conclude that although hiring more teachers will be necessary if schooling is to expand, there isn’t much evidence that higher pay for the existing teachers will make a substantial difference in the performance of the lowest-income children in the early grades. In that spirit, the notion of feeding children in school comes up in several of the essays, and is the focus of “Chapter 2: Feed All the Kids,” by Biniam Bedasso. He writes:

School feeding programs have emerged as one of the most common social policy interventions in a wide range of developing countries over the past few decades. Before the disruptions caused by the COVID-19 pandemic, nearly half the world’s schoolchildren, about 388 million, received a meal at school every day (WFP 2020). As such, school feeding is regarded as the most ubiquitous instrument of social protection in the world employed by developing and developed countries alike. But school feeding is also a human capital development tool. The theory of change for school feeding programs is rooted in the synergistic relationship between childhood nutrition, health, and education underscored in the integrated human capital approach (Schultz 1997). The stock of human capital acquired as an adult—a key determinant of productivity—is supposed to be a function of the individual and interactive effects of schooling, nutrition, health, and mobility. … A survey of government officials in 41 Sub-Saharan Africa countries conducted by the Food and Agriculture Organization of the United Nations (FAO) in 2016 shows that a great majority of the countries implement school feeding programs to help achieve education objectives. …

A review of 11 experimental and quasi-experimental studies from low- and middle-income countries reveals that school feeding contributes to better learning outcomes at the same time as it keeps vulnerable children in school and improves gender equity in education. Although school feeding might appear cost-ineffective compared with specialized education or social protection interventions, the economies of scope it generates are likely to make it a worthwhile investment particularly in food-insecure areas.

In short, Bedasso argues that feeding children at school in developing countries probably pays off just in terms of educational outcomes. But even if the payoff just in terms of education isn’t exceptionally high, the payoff to improved child nutrition in general takes many forms, including improved health and gender equity.

The case for feeding children at school seems strong to me. But it’s only one part of addressing the problem. Many developing countries have dramatically increased primary school enrollments, but they are not yet ensuring that most children can keep up and actually achieve basic literacy and numeracy at the primary school level.

A related problem is that the money for this broader agenda is lacking. Jack Rossiter contributes Chapter 6, which is titled “Finance: Ambition Meets Reality.” He looks at the costs of universal primary and secondary schooling, along with school meals, and concludes that the cost would be about $1.9 trillion for low- and middle-income countries in 2030. However, the projected education spending for these countries is about $750 billion less. Outside foreign aid might conceivably fill $50 billion of the gap, but probably not more. Rossiter makes the grim case:

Even if international financing comes in line to meet targets, governments are not going to have anything like the sums that costing exercises require. We can choose to ignore this shortfall, stick with plans, and watch costs creep up. Or we can see it as a serious budget constraint, redirect our attention toward finding ways to push costs down, and try hard to get close to universal access in the next decade.

It’s of course tempting to elide these tradeoffs. Maybe if we just root out waste, fraud, and abuse, we will have all the funds we need? Doubtful. As Rossiter points out, universal access by 2030 may require scaling back on the nice-to-haves–like smaller class sizes, higher pay for teachers, new classroom materials, and so on–and being quite hard-headed about the must-haves.
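A quick back-of-envelope calculation, using the figures from Rossiter’s chapter quoted above, shows why. The numbers below (in billions of dollars) are simply the ones cited in this post, and the script only illustrates the arithmetic:

```python
# Back-of-envelope on the education financing gap (all figures in billions of
# US dollars, taken from the discussion above; illustrative only).
shortfall = 750             # projected spending falls short of the ~$1.9 trillion cost
plausible_outside_aid = 50  # an optimistic ceiling on additional foreign aid

remaining_gap = shortfall - plausible_outside_aid
share_covered_by_aid = plausible_outside_aid / shortfall

print(f"Shortfall: ${shortfall} billion")
print(f"Remaining gap even with aid: ${remaining_gap} billion")
print(f"Share of the gap aid could cover: {share_covered_by_aid:.0%}")  # about 7%
```

In other words, even an optimistic increase in foreign aid would cover less than a tenth of the projected shortfall; the rest must come from domestic budgets or from pushing costs down.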

Was Primitive Communism Ever Real?

There’s an image many of us carry around in our minds, an image of a primitive time when small groups of people lived together and shared equally. Manvir Singh describes the evidence in “Primitive communism: Marx’s idea that societies were naturally egalitarian and communal before farming is widely influential and quite wrong” (Aeon, April 19, 2022). He writes:

The idea goes like this. Once upon a time, private property was unknown. Food went to those in need. Everyone was cared for. Then agriculture arose and, with it, ownership over land, labour and wild resources. The organic community splintered under the weight of competition. The story predates Marx and Engels. The patron saint of capitalism, Adam Smith, proposed something similar, as did the 19th-century American anthropologist Lewis Henry Morgan. Even ancient Buddhist texts described a pre-state society free of property. … Today, many writers and academics still treat primitive communism as a historical fact. …

Primitive communism is appealing. It endorses an Edenic image of humanity, one in which modernity has corrupted our natural goodness. But this is precisely why we should question it. If a century and a half of research on humanity has taught us anything, it is to be sceptical of the seductive. From race science to the noble savage, the history of anthropology is cluttered with the corpses of convenient stories, of narratives that misrepresent human diversity to advance ideological aims. Is primitive communism any different?

So that you are not kept in suspense, gentle reader, the evidence in favor of primitive communism is at best highly mixed. In some primitive tribes, perhaps especially the Aché hunter-gatherers who lived in what is now Paraguay, food was shared very equally. But the Aché appear to be at one end of the spectrum. In many other hunter-gatherer tribes, those who succeeded at hunting or gathering could distribute the product of their labor as they personally saw fit. In addition, even among the Aché, as well as every other hunter-gatherer tribe, there were a number of possessions held as private property. Singh writes:

[H]unter-gatherers are diverse. Most have been less communistic than the Aché. When we survey forager societies, for instance, we find that hunters in many communities enjoyed special rights. They kept trophies. They consumed organs and marrow before sharing. They received the tastiest parts and exclusive rights to a killed animal’s offspring. The most important privilege hunters enjoyed was selecting who gets meat. … Compared with the Aché, many mobile, band-living foragers lay closer to the private end of the property continuum. Agta hunters in the Philippines set aside meat to trade with farmers. Meat brought in by a solitary Efe hunter in Central Africa was ‘entirely his to allocate’. And among the Sirionó, an Amazonian people who speak a language closely related to the Aché, people could do little about food-hoarding ‘except to go out and look for their own’. …

All hunter-gatherers had private property, even the Aché. Individual Aché owned bows, arrows, axes and cooking implements. Women owned the fruit they collected. Even meat became private property as it was handed out. … Some proponents of primitive communism concede that foragers owned small trinkets but insist they didn’t own wild resources. But this too is mistaken. Shoshone families owned eagle nests. Bearlake Athabaskans owned beaver dens and fishing sites. Especially common is the ownership of trees. When an Andaman Islander man stumbled upon a tree suitable for making canoes, he told his group mates about it. From then, it was his and his alone. Similar rules existed among the Deg Hit’an of Alaska, the Northern Paiute of the Great Basin, and the Enlhet of the arid Paraguayan plains. In fact, by one economist’s estimate, more than 70 per cent of hunter-gatherer societies recognised private ownership over land or trees.

Singh describes how primitive societies often had severe punishments for those who transgressed the applicable property rights. In addition, the social bonds that led to extreme sharing among the Aché had some horrific consequences. When you engaged in extreme sharing, it was based on the idea that in the not-too-distant future you would also be the recipient of extreme sharing by others. But what about those who seemed unlikely to be contributors to future sharing? In Aché society, widows, the sick or disabled, and orphans were likely to be killed: “The Aché had among the highest infanticide and child homicide rates ever reported. Of children born in the forest, 14 per cent of boys and 23 per cent of girls were killed before the age of 10, nearly all of them orphans. An infant who lost their mother during the first year of life was always killed.”

We can debate why the vision of a pre-industrial sharing society has such a sentimental pull. But as a matter of fact, it seems to be a wildly oversimplified story. Singh writes: “For anyone hoping to critique existing institutions, primitive communism conveniently casts modern society as a perversion of a more prosocial human nature. Yet this storytelling is counterproductive. By drawing a contrast between an angelic past and our greedy present, primitive communism blinds us to the true determinants of trust, freedom and equity.”

I will leave my definitive discussion of “the true determinants of trust, freedom and equity” for another day. But in that discussion, the idea that human beings have a pure, primitive, natural inclination toward trust, sharing, and mutual respect will not play a major role.

Hat tip: I ran across a mention of this article in a post by Alex Tabarrok at the remarkably useful Marginal Revolution website.

The Digital Currency Ecosystem and Blockchain

Here’s one intuitive way to think about how cryptocurrencies like Bitcoin work:

For cryptocurrencies, this database is called the blockchain. One can loosely think of the blockchain as a ledger of money accounts, in which each account is associated with a unique address. These money accounts are like post office boxes with windows that permit anyone visiting the post office to view the money balances contained in every account. These windows are perfectly secured. While anyone can look in, no one can access the money without the correct password. This password is created automatically when the account is opened and known only by the person who created the account (unless it is voluntarily or accidentally disclosed to others). The person’s account name is pseudonymous (unless voluntarily disclosed). These latter two properties imply that cryptocurrencies (and cryptoassets more generally) are digital bearer instruments. That is, ownership control is defined by possession (in this case, of the private password). …

As with physical cash, no permission is needed to acquire and spend cryptoassets. Nor is it required to disclose any personal information when opening an account. Anyone with access to the internet can download a cryptocurrency wallet—software that is used to communicate with the system’s miners (the aforementioned volunteer accountants). The wallet software simultaneously generates a public address (the “location” of an account) and a private key (password). Once this is done, the front-end experience for consumers to initiate payment requests and manage money balances is very similar to online banking as it exists today. Of course, if a private key is lost or stolen, there is no customer service department to call and no way to recover one’s money.

David Andolfatto and Fernando M. Martin offer this description in “The Blockchain Revolution: Decoding Digital Currencies,” which appears in the 2021 Annual Report of the St. Louis Federal Reserve.

The terminology of a “bearer” instrument may be unfamiliar to some readers, but it’s straightforward. Cash is a “bearer” instrument: the person with the cash can spend it. Back in the late 19th and early 20th century there used to be financial instruments called “bearer bonds,” immortalized in many a mystery/adventure novel, where whoever walked into a bank holding the physical piece of paper that represented the bond could deposit it without any other proof of ownership. Similarly, if an outside player gains access to the private key controlling a cryptocurrency account, it can simply take the assets.
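To make the “post office box” analogy concrete, here is a minimal Python sketch of a toy ledger in which every balance is publicly visible by address, but only the holder of the matching private key can spend. It is deliberately simplified: real cryptocurrencies derive addresses from elliptic-curve key pairs and authorize spending with digital signatures rather than by revealing the key, and all names below are made up for illustration.

```python
import hashlib
import secrets

def create_account():
    """Toy account: a random private key and a pseudonymous address derived from it."""
    private_key = secrets.token_hex(32)                               # known only to the owner
    address = hashlib.sha256(private_key.encode()).hexdigest()[:20]   # public, pseudonymous
    return private_key, address

# The toy "ledger": anyone can read every balance, keyed by public address.
ledger = {}

def spend(ledger, from_private_key, to_address, amount):
    """Spending requires possession of the private key: a digital bearer instrument."""
    from_address = hashlib.sha256(from_private_key.encode()).hexdigest()[:20]
    if ledger.get(from_address, 0) < amount:
        raise ValueError("insufficient balance")
    ledger[from_address] -= amount
    ledger[to_address] = ledger.get(to_address, 0) + amount

alice_key, alice_address = create_account()
bob_key, bob_address = create_account()
ledger[alice_address] = 100                # everyone can see this balance ...
spend(ledger, alice_key, bob_address, 30)  # ... but only Alice's key can move it
print(ledger)                              # {<alice_address>: 70, <bob_address>: 30}
```

Lose the private key and, as Andolfatto and Martin note, there is no customer service department to call; whoever holds the key holds the money.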

The concern is more than hypothetical. The Wall Street Journal ran a story late last week called “Crypto Thieves Get Bolder by the Heist, Stealing Record Amounts.” It points out that in the last 38 weeks, there have been 37 major hacks at cryptocurrency/blockchain organizations. Last year the losses were about $3.2 billion; this year, they may be larger. The most recent large hack was at a “stablecoin” project called Beanstalk, essentially emptying it of digital assets valued at $182 million.

In the St. Louis Fed report, Andolfatto and Martin provide a readable overview of the blockchain-related ecosystems: cryptocurrency, stablecoins, central bank digital currency, decentralized finance, and more. Their tone is resolutely balanced, neither advocating nor attacking these developments, but simply pointing out how they work and the tradeoffs involved. They write:

[P]ossibly the most attractive characteristic of Bitcoin is that it operates independently of any government or concentration of power. Bitcoin is a decentralized autonomous organization (DAO). Its laws and regulations exist as open-source computer code living on potentially millions of computers. The blockchain is beyond the (direct) reach of government interference or regulation. There is no physical location for Bitcoin. It is not a registered business. There is no CEO. Bitcoin has no (conventional) employees. The protocol produces a digital asset, the supply of which is, by design, capped at 21 million BTC. Participation is voluntary and permissionless. Large-value payments can be made across accounts quickly and cheaply. It is not too difficult to imagine how these properties can be attractive to many people.

There are some obvious practical applications of such currency. If you want to make an international payment, for example, setting up a cryptocurrency account and sharing the passwords with someone at the other end may have lower fees than the conventional ways of wiring money or transferring between international banks. The most prominent example is El Salvador, which has been using the US dollar as its official currency, but recently announced that it would also use Bitcoin as an official currency. This policy experiment–two official currencies whose values can fluctuate relative to each other–has some obvious built-in tensions.

But while blockchain-related transactions can certainly fulfill useful purposes in some cases, it feels to me as if most of the energy around cryptocurrency is driven by two factors: the allure of anonymity and thus being able to sidestep existing regulations, and the allure of making money if cryptocurrencies rise in value relative to the US dollar. They write:

Much of the excitement associated with cryptocurrencies seems to stem from the prospect of making money through capital gains via currency appreciation relative to the U.S. dollar (USD). … To be sure, the price of a financial security can be related to its underlying fundamentals. It is not, however, entirely clear what these fundamentals are for cryptocurrency or how they might generate continued capital gains for investors beyond the initial rapid adoption phase. Moreover, while the supply of a given cryptocurrency such as Bitcoin may be capped, the supply of close substitutes (from the perspective of investors, not users) is potentially infinite. Thus, while the total market capitalization of cryptocurrencies may continue to grow, this growth may come more from newly created cryptocurrencies and not from growth in the per-unit price of any given cryptocurrency, such as Bitcoin.

Are US Fertility Rates Starting to Decline?

In the big-picture long-run sense, US fertility rates haven’t changed much in recent decades. Here’s a figure from Anne Morse of the US Census Bureau in “Stable Fertility Rates 1990-2019 Mask Distinct Variations by Age” (April 6, 2022).

The black line shows total births in the US; the red line shows the fertility rate. For the fertility rate, you can see the plunge in US fertility rates early in the 20th century, the low fertility rates of the Great Depression in the 1930s, the jump in fertility rates in the post-World War II “baby boom,” and the following decline in fertility rates. As the figure shows, US fertility rates have been fairly stable since the 1970s, albeit with what looks like a small drop after 2008 or so and the experience of the Great Recession.

The patterns of fertility by age are also shifting. In this figure, the blue line shows patterns of fertility by age in 1990, and then each line shows the pattern in a following year up to the orange line for fertility in 2019. Birthrates for those under age 19 have been falling fairly sharply; however, birth rates for those in their late 20s and older are higher in 2019 than they were in 1990 (which for this topic really isn’t all that long ago).

But although US fertility rates have by no means fallen off a cliff, are we seeing the beginning of a decline? Melissa S. Kearney, Phillip B. Levine, and Luke Pardue consider this topic in “The Puzzle of Falling US Birth Rates since the Great Recession,” in the Winter 2022 issue of the Journal of Economic Perspectives. (Full disclosure: I’m Managing Editor of this journal. All articles in the JEP, back to the first issue, are freely available online compliments of the American Economic Association.) The authors focus in particular on birth rates from the mini-peak in 2007 up through 2020.

Part of what’s happening here is that birthrates tend to decline in recessions (as can also be seen in the longer-term figure above). But that explains less than half of the decline. The drop in 2020 is mostly not pandemic-related–after all, most of the children born in 2020 were conceived before the pandemic started. But pandemics also tend to reduce birthrates (as shown in the top figure with regard to the influenza epidemic of 1919), so birthrates are likely to drop lower in 2021 and this year.

One way to get some insight into the patterns here is to divide up the groups that contributed to the decline in US birth rates by age, education, and ethnicity. When Kearney, Levine, and Pardue do this, here’s what they find:

Overall, three-quarters of the fall in birth rates can be attributed to just eight of these demographic groups. Three of the four biggest drops are for those in the age 15-19 bracket–which should probably be considered as overall good news. Also, four of the eight biggest drops are for white non-Hispanics of various education levels.

One factor here seems to be that birth rates for Hispanic women rose in the early 2000s and then fell. Morse of the Census Bureau puts it this way: “While fertility rates broadly declined in the United States from 1990-2019, there was a mini baby boom in the early 2000s. This increase was driven by foreign-born Hispanic women. This mini baby boom to foreign-born Hispanics ended in 2007, just before the Great Recession began later that year. … It is not clear what portion of the fertility decline to foreign-born Hispanics can be attributed to the economic downturn since the decline began before the Great Recession started. This decline may partially be due to the end of the mini baby boom for foreign-born Hispanic women and a return to long-term downward fertility trend.”

Similarly, in looking across US states, Kearney, Levine, and Pardue find that the states with the biggest drops in birthrates tend to be in the southwest, where Hispanic populations are generally higher.

Kearney, Levine, and Pardue argue that perhaps the most plausible overall explanation for the fall in birthrates is also the simplest: younger cohorts of women are growing up with a different idea of what they want from life.

It is unlikely that career aspirations or parenting norms changed exactly in or around 2007. Note, though, that women who grew up in the 1990s were the daughters of the 1970s generation and women who grew up in the 1970s and 1980s were daughters of the 1950s and 1960s generation. It seems plausible that these more recent cohorts of women were likely to be raised with stronger expectations of having life pursuits outside their roles as wives and mothers. It also seems likely that the cohorts of young adults who grew up primarily in the 1990s or later—and reached prime childbearing years around and post 2007—experienced more intensive parenting from their own parents than those who grew up primarily in the 1970s and 1980s. They would have a different idea about what parenting involves. We speculate that these differences in formed aspirations and childhood experiences could potentially explain why more recent cohorts of young women are having fewer children than previous cohorts.

However, it’s also interesting that US women are having fewer children than they say, in survey data, that they would ideally desire.

Some nationally representative surveys ask women about their expectations or desires for childbearing. On this point, the number of children that women report wanting to have has been dropping slightly. Hartnett and Gemmill (2020) report that data from the 2006–2017 National Survey of Family Growth shows that the total number of children women intend to have declined (from 2.26 in 2006–2010 to 2.16 children in 2013–2017) and that the proportion of women intending to remain childless increased slightly. Women also tend to end up having fewer children than they say would be ideal and that gap has been growing (Stone 2021). One interpretation of this discrepancy is that it offers prima facie evidence that constraints or costs are playing a role in depressing birth rates. An alternative interpretation is that women report they want, say, two or three children, but when faced with actual trade-offs associated with having more children, they choose differently.

The Current Puzzle of Labor Markets

There’s a puzzle in US labor markets, as well as in labor markets in a number of other countries. On one side, the total number of US jobs is still lower than before the pandemic. Total US employment was 152.5 million in February 2020, but 150.9 million in March 2022.

But even though total jobs are down, the unemployment rate is back down to where it was before the pandemic. The unemployment rate was 3.5% in February 2020, and 3.6% in March 2022.

How can an economy have 1.6 million fewer jobs but (basically) the same unemployment rate? The answer is that the unemployment rate only counts people who are looking for jobs. If you are not looking for a job, then you are “out of the labor force”–and treated in the statistics like a retiree or a parent who is choosing to be home with children, not like someone who wants a job. Thus, 63.4% of adults were in the labor force (that is, employed or unemployed and looking for work) in February 2020, while only 62.4% were in the labor force in March 2022.
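A stylized calculation may help make the mechanics concrete. The figures below are round, hypothetical numbers chosen only to illustrate the definitions, not actual BLS data: when people who lose jobs stop looking for work, employment and participation fall while the measured unemployment rate barely moves.

```python
def labor_stats(employed, unemployed, adult_population):
    """Unemployment rate counts only active job seekers; participation counts both."""
    labor_force = employed + unemployed
    unemployment_rate = unemployed / labor_force
    participation_rate = labor_force / adult_population
    return unemployment_rate, participation_rate

# Hypothetical round numbers, in millions, for illustration only.
adult_population = 260.0

# "Before": 152 million employed, 5.5 million unemployed and looking for work.
u_before, lfpr_before = labor_stats(152.0, 5.5, adult_population)

# "After": 1.6 million fewer jobs, but those who lost them stopped looking,
# so they count as out of the labor force rather than as unemployed.
u_after, lfpr_after = labor_stats(150.4, 5.5, adult_population)

print(f"before: unemployment {u_before:.1%}, participation {lfpr_before:.1%}")
print(f"after:  unemployment {u_after:.1%}, participation {lfpr_after:.1%}")
# Unemployment is essentially unchanged (3.5% in both cases, to one decimal),
# while participation slips from 60.6% to 60.0%: fewer jobs, same unemployment rate.
```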

So here’s the puzzle: Is the labor market “tight,” with relatively few workers looking for job openings, as shown by the low unemployment rate? Or is the labor market “loose,” in the sense that the total number of jobs is still below the peak and at least some of the workers now out of the labor force might re-enter if given the opportunity?

The IMF’s most recent World Economic Outlook, released in April 2022, offers some evidence here, suggesting four possible hypotheses for the current labor market:

(1) labor market mismatch—discrepancies between the types of vacant positions and the skills of job seekers; (2) health-related concerns, which may be a strong driver of the withdrawal of older workers from the workforce; (3) changing job preferences among workers, which may account in part for historically high quit rates—a phenomenon sometimes called the “Great Resignation”; and (4) school and childcare center disruptions leading mothers of young children to exit the labor force—the “She-cession.”

The labor market mismatch refers to people with experience in one sector of the economy who then have trouble shifting to another area. Early in the pandemic this was surely a major factor. But the IMF estimates that “[a]s of the third quarter of 2021, labor market mismatch accounted for at most one-fifth of the shortfall in the employment rate …” A drop in the number of older workers, probably partly out of health concerns, can account for about one-third of the drop in employment. The US Bureau of Labor Statistics doesn’t produce a regular employment estimate for mothers of young children, but there is at least partial evidence suggesting this is a factor, too.

The shift in job preferences seems like a major factor. Here’s how the IMF describes it:

Rates of voluntary job quits have reached historic highs in both countries [the United States and the United Kingdom]. There is tentative evidence that, beyond seizing new opportunities to move up the job ladder in tight labor markets, workers’ preferences may have partly shifted toward jobs that bring not only higher pay but also greater safety and flexibility. In particular, several industries in which job quit rates have risen the most involve a disproportionate share of contact-intensive, physically strenuous, less flexible, and low-paying jobs, such as in accommodation and food services and retail trade.

Rising labor market tightness has spurred faster nominal wage growth, particularly for low-paying jobs. Since the start of the pandemic, the increase in tightness alone is estimated to have directly increased overall nominal UK and US wage inflation by approximately 1.5 percentage points. In low-pay industries, this impact has been much greater, reflecting both above-average increases in labor market tightness and a stronger historical link between tightness and wage growth in these industries. So far, overall implications of increased tightness for wage inflation have been muted, partly because low-wage workers account for a relatively small share of firms’ total labor costs …

The World Economic Outlook report draws upon an IMF Staff Discussion Note, “Labor Market Tightness in Advanced Economies,” by Romain A Duval, Yi Ji, Longji Li, Myrto Oikonomou, Carlo Pizzinelli, Ippei Shibata, Alessandra Sozzi, and Marina M. Tavares (March 2022). That report offers more detail across countries. Here are a couple of its conclusions:

Most labor markets are tighter than they were prior to COVID-19. These include English-speaking (Australia, Canada, United Kingdom, United States) and several northern and western continental European economies, while Germany and Japan still show lower vacancy-to-unemployment ratios than in 2019. Vacancies have risen steadily across all sectors, including those with more contact-intensive, less teleworkable, and/or lower-skilled jobs that were hit hard by the pandemic. Fears that COVID-19 might permanently destroy these jobs, including through automation, have not materialized so far.

Tight labor markets partly reflect reduced labor force participation, which has shrunk the pool of available job seekers. The main reason why employment remains subdued, particularly compared to precrisis trends, is that disadvantaged groups—including, depending on countries, the low-skilled, older workers, or women with young children—have yet to fully return to the labor market. Looking through cross-country heterogeneity, in the median country, low-skilled workers—about one-fourth of whom are older workers—account for more than two-thirds of the aggregate employment gap vis-à-vis its pre-COVID-19 trend, while older workers as a group contribute about one-third of the gap. In some cases, the decline in immigration also seems to have amplified labor shortages among low-skilled jobs.

Some Economics of Supply Chains

Here’s an example of a global supply chain, which is all the more powerful for its ordinariness. It’s taken from the most recent Economic Report of the President, produced each year by the White House Council of Economic Advisers. The report is an overview of the economy from the viewpoint of academic economists who are admittedly and overtly sympathetic to the president in power, but who also feel a professional compulsion to provide facts and figures and tables to back up their beliefs. Chapter 6 of this year’s report is entitled “Building Resilient Supply Chains.” The CEA writes:

The M9 hot tub is made by Bullfrog Spas in Utah, where 500 workers assemble almost 1,850 parts from 7 countries and 14 states (see figure 6-i). The hot tub top shell starts as a flat acrylic sheet from Kentucky, which is then combined with a different type of plastic in Nevada and sprayed with an industrial chemical from Georgia. Parts of the frame shell of the hot tub are driven in by trucks from Idaho several times a week. Many of the electric motors come from China and are assembled into water pumps in Mexico and then driven to Utah. Additional material for exterior cabinets is transported from Shanghai on container ships through the ports of Long Beach or Oakland. Water-spraying jets are made in Guangzhou, China; are sent through the Panama Canal and Eastern ports to the supplier’s warehouse in Cleveland, Tennessee; and then are sent on to Utah. Once fully assembled, the finished hot tubs are placed on trucks or trains and delivered to retailer warehouses. This example illustrates both the extent of outsourcing, which increases the number of individual companies involved in the production of a single good, and the geographic distance traveled by each component, estimated to total nearly 900,000 miles, as well as the dependence on transportation and logistics this entails.

On one side, it seems pretty clear that Bullfrog Spas would not be providing jobs and payroll in Herriman, Utah, without some kind of far-flung supply chain. The great benefit of supply chains is the ability to combine specialized inputs from all over the world. On the other side, any company with far-flung supply chains will be vulnerable to disruptions of those supply chains.

The CEA report emphasizes two main reasons for the growth of global supply chains over the last three decades or so.

The first change is increased access to foreign suppliers, making offshoring more cost-effective for firms, largely due to advances in information technology (IT) and reductions in trade barriers since the 1990s. Advances in IT allow firms to convey detailed information about product and process specifications across long distances, while improvements in transportation, such as containerization, allow goods to be moved more quickly and consistently … The second key change is the growing role of financial criteria and institutions in corporate decisionmaking. This “financialization” of the economy has encouraged outsourcing and offshoring because of savings in costs that are easily measurable. … Such incentives have encouraged managers to focus more on these financial statement numbers than on less easily measurable metrics, such as resilience. … This financialization of the economy has been an important driver of U.S. lead firms’ supply chain strategies. Outsourcing of production and other capital-intensive activities is prescribed by consulting firms promoting an “asset-light” strategy. These firms note that, all else held equal, a lower amount of capital makes a given amount of revenue yield a higher measured return on assets …
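The “asset-light” logic described above is easy to see with a toy return-on-assets calculation. The firm and figures below are hypothetical, used only to illustrate why outsourcing capital-intensive production flatters an easily measured financial metric even if underlying profits barely change:

```python
def return_on_assets(net_income, total_assets):
    """ROA: an easily measured metric that financial markets watch closely."""
    return net_income / total_assets

# Hypothetical lead firm that owns its own factories.
in_house = return_on_assets(net_income=100, total_assets=1000)    # 10%

# The same firm after outsourcing production: it earns slightly less
# (it now pays a supplier's margin) but no longer carries the factories
# on its own balance sheet.
outsourced = return_on_assets(net_income=95, total_assets=500)     # 19%

print(f"In-house ROA:   {in_house:.0%}")
print(f"Outsourced ROA: {outsourced:.0%}")
# Measured ROA nearly doubles, while harder-to-measure qualities such as
# supply-chain resilience never show up in the ratio at all.
```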

As global supply chains have wobbled in the COVID pandemic, the potential tradeoffs have become more clear. The most obvious tradeoff is the potential for gains from a smoothly functioning supply chain vs. the risks and costs of a disrupted supply chain. But the CEA report mentions a number of more subtle tradeoffs, which often depend on the exact nature of the supply chain.

For example, the cost of building a cutting-edge semiconductor fabrication plant now exceeds $12 billion, and so most users of computer chips buy rather than make their own. But is the best choice to rely on very standardized chips, which can be purchased from several manufacturers, or to work closely with just one or two fabs to design customized chips–and to reserve production space for those chips?

The use of semiconductors in the auto industry illustrates this point. Although semiconductors became key to the operation of modern vehicles more than a decade ago, many automakers did not begin to communicate directly with semiconductor manufacturers until late 2021. Rather, they bought chips indirectly, through distributors or first-tier suppliers, and did not commit to purchases more than a few weeks out. Thus, although their product plans included more intensive use of semiconductors in future vehicles, automakers had not been credibly signaling this intention to manufacturers. Without this commitment, semiconductor manufacturers were unwilling to build new fabs for automotive-grade chips, since fabs must maintain very high capacity utilization to be profitable. Further, they did not devote resources to innovating on the dimensions important to automakers, such as reduced cost and increased reliability. In contrast, Apple has long paid to reserve capacity in advance at fabs, and has worked with semiconductor manufacturers and design firms to innovate on the dimensions important to them—speed and power …

Another tradeoff related to innovation is that getting chips from a long distance away gives a user access to cutting-edge technologies. However, there is also a belief that when engineers from chip-makers and chip-users are geographically closer, both sides may benefit:

However, there is evidence suggesting that geographically separating production and innovation impedes innovation. Engineers overseeing production are exposed to the capabilities and problems of existing technology, helping them to generate new ideas both for improving processes and for applying a given technology to new markets. Losing this exposure reduces the opportunity to generate such innovative ideas. For example, when production of consumer electronics migrated to Asia in the 1980s, the United States lost the potential to later compete in the burgeoning market for follow-on products like flat-panel displays, LED lighting, and advanced batteries …

Yet another tradeoff of supply chains is that a lead firm, purchasing inputs from others, may buy from a supplier firm that offers considerably lower wages and benefits, or perhaps also worse working conditions. Especially if the supplier is another US firm, this ability to contract out jobs to a separate workforce under a different employer seems like a mixed blessing for the US workforce viewed as a whole.

There are of course ways to reduce vulnerability to short-term supply chain disruptions: holding larger inventories, making sure you have a geographically distributed range of suppliers for key inputs, planning ahead for shifting to other inputs if needed, and so on. The problem is that these solutions have costs of their own–and at some point, the costs of avoiding vulnerability to a disruption mean that the extended supply chain isn’t worth it in the first place. But many firms that rely on long supply chains had, frankly, not given a lot of thought to their vulnerability before the pandemic.

Government can sometimes play a role, too. One vicious circle in supply chain disruptions is that all the buyers are trying to build up their own inventories–which of course makes the shortage worse. In some cases, the government can help alleviate this hoarding by providing shared information about the market.

For instance, the U.S. Department of Health and Human Services has taken on an important role in providing an accurate demand signal for PPE. The department’s Supply Chain Control Tower receives near-daily data from distributors that represent more than 80 percent of the volume for the commodities it is tracking, along with supply status from 5,000 hospitals. This dashboard alleviates hospitals’ fear of shortages, so they do not need to incur extra costs of holding inventory. The dashboard also allows distributors to receive a truer demand signal by reducing excessive ordering that exacerbates supply constraints (U.S. Department of Health and Human Services 2022, 13). In cases such as these, the public sector is well positioned to collect, aggregate, and disseminate this information.

Another role for government is in setting the conditions for investment in infrastructure–for example, to reduce the risk that US ports become clogged. In a different chapter, the CEA notes:

There is much evidence that the United States lags far behind its competitors in supplying the essential inputs to economic capacity. U.S. infrastructure provides several examples. The World Economic Forum’s Global Competitiveness Report found in 2019 that, out of 141 countries, the United States ranked 13th in quality of overall infrastructure, 17th in quality of road infrastructure, 23rd in electricity supply quality, and 30th in reliability of water supply (Schwab 2019). A separate ranking of global ports by the World Bank and IHS Markit found that no U.S. port made it into the top 50 globally, and just 4 are in the top 100. By comparison, of the top 10 ports, several are in China. The Federal Communications Commission (FCC 2018) has also ranked the United States 10th among developed countries for broadband speed and connectivity. In transporting goods and services, in connecting workers around the country and globe, in transforming technological progress into productivity gains, the United States is not at the frontier.

Supply chains offer enormous economic benefits. But one hopes that a lasting economic lesson from the pandemic, for both private firms and government actors, is to think more seriously about the risks and vulnerabilities of supply chains.

The Case for Doubling US R&D Spending

“We massively underinvest in science and innovation, with implications for our standards of living, health, national competitiveness, and capacity to respond to crisis.” Benjamin F. Jones makes the case in “Science and Innovation: The Under-Fueled Engine of Prosperity.” It’s one of eight essays appearing in an e-book on Rebuilding the Post-Pandemic Economy (Aspen Economic Strategy Group, November 2021). Jones offers some vivid reminders that increases in the standard of living–not just purely economic, but also other aspects like health–are closely related to investments in new technologies.

Real income per-capita in the United States is 18 times larger today than it was in 1870 (Jones 2016). These gains follow from massive increases in productivity. For example, U.S. corn farmers produce 12 times the farm output per hour since just 1950 (Fuglie et al. 2007; USDA 2020). Better biology (seeds, genetic engineering), chemistry (fertilizers, pesticides), and machinery (tractors, combine harvesters) have revolutionized agricultural productivity (Alston and Pardey 2021), to the point that in 2018 a single combine harvester, operating on a farm in Illinois, harvested 3.5 million pounds of corn in just 12 hours (CLASS, n.d.). In 1850, it took five months in a covered wagon to travel west from Missouri to Oregon and California, but today it can be done in five hours—traveling seven miles up in the sky. Today, people carry smartphones that are computationally more powerful than a 1980s-era Cray II supercomputer, allowing an array of previously hard-to-imagine things—such as conducting a video call with distant family members while riding in the back of a car that was hailed using GPS satellites overhead.

Improvements in health are also striking: Life expectancy has increased by 35 years since the late 19th century, when about one in five children born did not reach their first birthday (Murphy and Topel 2000). Back then, typhoid, cholera, and other diseases ran rampant, Louis Pasteur had just formulated the germ theory of disease, which struggled to gain acceptance, and antibiotics did not exist. In the 1880s, even for those who managed to reach age 10, U.S. life expectancy was just age 48 (Costa 2015). Overall, when examining health and longevity, real income, or the rising productivity in agriculture, transportation, manufacturing, and other sectors of the economy, the central roles of scientific and technological progress are readily apparent and repeatedly affirmed (Mokyr 1990; Solow 1956; Cutler et al. 2006; Alston and Pardey 2021; Waldfogel 2021).

Jones emphasizes some other gains from technology as well. For example, technology can offer flexibility in confronting various threats. Without decades of earlier research, COVID vaccines could not have been developed in less than a year after the pandemic hit: “Whether facing a pandemic, climate change, cybersecurity threats, outright conflict, or other challenges, a robust capacity to innovate—and to do so quickly—appears central to national security and national resilience.”

Moreover, it’s worth remembering that many countries in the rest of the world have active research and development efforts in many areas. The technology frontier is a moving target. The US will either stay near the lead in many of these areas, or fall behind.

In the mid-1990s, the United States was in the top five of countries globally in both total R&D expenditure as a share of GDP and public R&D expenditure as a share of GDP (Hourihan 2020). Today, the United States ranks 10th and 14th in these metrics, and U.S. public expenditure on R&D as a share of GDP is now at the lowest level in nearly 70 years. … By contrast, China has massively increased its science and innovation investments in pursuit of leading the world economically and strengthening its hand in global affairs. China’s R&D expenditure has grown 16% annually since the year 2000, compared to 3% annually in the United States. If China implements its current five-year plan, it will soon exceed the United States in total R&D expenditure.

Jones’s essay reviews the argument, fairly standard among economists, that a pure free market will tend to underinvest in new technologies, because in a pure free market the innovator will not capture the full value of an innovation. Indeed, if firms face a situation where unsuccessful attempts at innovation just lose money, while successful innovations are readily copied by others, or the underlying ideas of the innovation just lead to related breakthroughs for others, then the incentives to innovate can become rather thin, indeed. This is the economic rationale for government policies to support research and development: direct support of basic research (where the commercial applications can be quite unclear), protection of intellectual property like patents and trade secrets, tax breaks for companies that spend money on R&D, and so on.

A key insight is that many innovations build on other insights in unexpected ways. Here are a couple of vivid examples from Jones: the link from Albert Einstein to Uber, and the link from life in hot springs to genetic science.

Uber is a novel business model that has disrupted the transportation sector, and to the user Uber might appear as a simple mobile app enabling a new business idea. But Uber relies on a string of prior scientific achievements. Among them is GPS technology, embedded in the smartphone and in satellites overhead, which allows the driver and rider to match and meet. The GPS system in turn works by comparing extremely accurate time signals from atomic clocks on the satellites. But because the satellites are moving at high velocity compared to app users and experience less gravity, time is ticking at a different speed on the satellites, according to Einstein’s mind-bending theories of special and general relativity. In practice, the atomic clocks are adjusted according to Einstein’s equations, before the satellite is launched, to account exactly for these relativistic effects. Without these corrections, the system would not work. There is thus a series of intertemporal spillovers from Einstein to the GPS system to the smartphone to Uber (not to mention all the other innovations, mobile applications, and new businesses that rely on GPS technology) …

To study DNA, it must first be replicated into measurable quantities, and this replication process depends on many prior scientific advances. One critical if unexpected advance occurred in 1969, when two University of Indiana biologists, Thomas Brock and Hudson Freeze, were exploring hot springs in Yellowstone National Park. Brock and Freeze were asking a simple question: can life exist in such hot environments? They discovered a bacterium that not only survived but thrived—a so-called extremophile organism—which they named Thermus aquaticus. Like Einstein’s work on relativity, this type of scientific inquiry was motivated by a desire for a deeper understanding of nature, and it had no obvious or immediate application. However, in the 1980s, Kary Mullis at the Cetus Corporation was searching for an enzyme that could efficiently replicate human DNA. Such replication faces a deep challenge: it needs to be conducted at high heat, where the DNA unwinds and can be copied, but at high heat replication enzymes do not hold together. Mullis, in a Eureka moment, recalled the story of Thermus aquaticus, knowing that this little bacterium must be able to replicate its DNA at high heat given its environment. And indeed, Thermus aquaticus turned out to provide what was needed. Its replication enzyme was declared by Science Magazine to be the “molecule of the year” in 1989. Mullis would be awarded a Nobel Prize soon after, and the biotechnology industry would boom, opening new chapters of human progress.

When the spin-off effects to other discoveries and inventions are taken into account, the gains to research and development are enormous. What would you have been willing to pay for a COVID vaccine in early 2020? Jones says it this way:

Notably, these social returns are not just good: They are enormous. Effectively, the science and innovation system is akin to having a machine where society can put in $1 and get back $5 or more. If any business or household had such a machine, they would use it all the time. But this machine is society’s. The gains from investment largely accrue to others—not so much to the specific person who puts the dollar into the machine.

Of course, it’s impossible to know in advance exactly what ideas are going to be important, or what firms are going to be success stories. Indeed, one problem with relying on the private sector for R&D is that there is a tendency for firms to focus on the technologies that look most profitable in the short- or medium-terms, rather than building up a broad portfolio of knowledge that can be used in many ways. It’s important to let a thousand flowers blossom–because, if I can mix my metaphors, one of those flowers will grow into a mighty redwood. Jones says it more neatly: “In many ways, the vision of science and innovation needs to be the opposite of `picking winners.’ Rather, we need to `pick portfolios,’ with an emphasis on both increasing the scale of funding and human capital, and the diversity of approaches that are taken.”

Jones has lots of other points to make about technology and research–for example, although it’s widely believed that innovations are more likely to come from younger researchers, this does not actually seem to be true. But the bottom line is that when economists try to calculate the broad social returns to investing in research and development, it’s common to find estimates of annual returns in the range of 40-50%. He argues that “a sustained doubling of all forms of R&D expenditure in the U.S. economy could raise U.S. productivity and real per-capita income growth rates by an additional 0.5 percentage points per year over a long time horizon.” And of course, these economic gains don’t include the gains to health, or a greater ability to respond in crises, or the benefits of maintaining US global competitiveness.
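It’s worth pausing on what an extra 0.5 percentage points of growth means when compounded over a long horizon. The small calculation below uses the half-point increment Jones cites; the 1.5 percent baseline growth rate and the 50-year horizon are assumptions chosen only for illustration:

```python
# Compounding an extra 0.5 percentage points of annual growth (illustrative).
baseline_growth = 0.015   # assumed baseline per-capita income growth rate
boost = 0.005             # the additional half percentage point Jones estimates
years = 50                # illustrative long horizon

income_baseline = (1 + baseline_growth) ** years
income_boosted = (1 + baseline_growth + boost) ** years
gain = income_boosted / income_baseline - 1

print(f"Income after {years} years at baseline growth: {income_baseline:.2f}x today")
print(f"Income after {years} years with the R&D boost: {income_boosted:.2f}x today")
print(f"Extra income from the half-point boost: {gain:.0%}")
# Under these assumptions, the half-point boost leaves income per person
# nearly 30% higher after 50 years than it otherwise would have been.
```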

Jones is also thoughtful in noting that national efforts at research and technology, and at applying those innovations in the broader economy, are not just a matter of budgetary appropriations. It’s necessary to expand the number of researchers and laboratories, which in turn means increasing the pipeline of people with the interests and skills to do that work, which in turn means reaching back to college and high school and elementary school–because someone who, say, leaves fourth grade without being able to do basic arithmetic is likely to have a much harder time becoming a researcher someday. This literature sometimes discusses the problem of “lost Einsteins”–those American children who never got the support and encouragement to develop their underlying abilities in math, science, and innovation.

Another part of the picture–and a faster way to expand US R&D than expanding the pool of students interested in these areas–is to encourage skilled immigration.

In a systematic study of inventors in the United States, Bernstein et al. (2019) examine the role of immigrants in U.S. invention. The central finding is that immigrants are especially productive in inventive activity. Not only do immigrants patent more often than U.S.-born individuals, but their patents are both more impactful for future invention and have greater market value. Overall, immigrants produce twice as many patents as one would expect from their population share. This is consistent more broadly with the STEM orientation of the immigrant workforce. While immigrants make up about 14% of the U.S. workforce, they account for 29% of the college-educated science and engineering workforce and 52% of science and engineering doctorates (Kerr and Kerr 2020). Overall, immigrants have accounted for about 30% of U.S. inventive activity since 1976 (Bernstein et al. 2019).

A similar picture emerges when examining entrepreneurship. Azoulay et al. (2021) study every new venture in the United States founded from 2007 through 2014 and examine whether the founders were born in the United States or abroad. They find that immigrants are 80% more likely to start a company than U.S.-born individuals. Moreover, immigrant founders are more likely to start companies of every size, including the highest-growth and most successful new businesses (see Figure 6). Indeed, looking at Fortune 500 firms today and tracing them back to their founding roots, one similarly finds a disproportionate role of immigrant founders—from Alexander Graham Bell to Sergey Brin to Elon Musk. A remarkable finding here is that immigrant-founded firms employ more people in total than there are immigrants in the U.S. workforce.

Interview with Daniel Kahneman: Noise and Decision-Making

There are some situations where you would hope or expect that experts in a certain area, when faced with the same fact pattern, would make similar decisions: for example, if several doctors separately examine the same patient, or several judges consider an appropriate sentence for the same convicted criminal, or several insurance underwriters consider the appropriate insurance premium for a certain commercial client. Of course, no one would expect identical decisions all the time, but one might hope or expect that in settings like these, the range of variation would be modest. However, it turns out that there is often a lot of “noise” in these decisions–meaning that real-world doctors, judges, underwriters, patent examiners, grant reviewers, teachers giving grades, and others often make very different decisions when facing what seem to be objectively similar fact patterns.

Sara Frueh interviews Daniel Kahneman in “Try to Design an Approach to Making a Judgment; Don’t Just Go Into It Trusting Your Intuition” (Issues in Science and Technology, Spring 2022). During the last few years, Kahneman has been focusing on this kind of “noise” in individual decision-making.

There is a fundamental conflict here. On one side, we often want experts to be able to see the big picture and the entire subject and to be alert for special circumstances that can and should make a difference. We want our experts to do individually customized decision-making. On the other side, different experts will be hypersensitive to different kinds of unique circumstances–often without even being self-aware about what facts or patterns are affecting their particular judgment. Thus, when experts have the freedom to customize their decisions, the result is a discomfiting amount of “noise” in those decisions. There is no perfect answer here. But Kahneman argues that it is important to think about the process by which you would like to make the decision, and not just to rely on intuition. He says:

Given that, we do have ideas about procedures that are better than others, and the main example in my mind was a contrast between structured and unstructured hiring interviews. Unstructured interviews are when interviewers do what comes naturally. The structured interview breaks up the problems into dimensions, gets separate judgments on each dimension, and delays global evaluation until the end of the process, when all the information available can be considered at once. We know that neither structured nor unstructured interviews are very good predictors of success on the job, which is extremely difficult to predict. But within those limits, the structured interview is clearly better than the unstructured one.

If you think of decisions, then decisions involve options, and you can think of the options as similar to job candidates. That means that each option has attributes, and you want to assess those attributes separately. And we expect that approach to have the same kind of advantages that structured interviews have over unstructured interviews. So, the most important recommendation of decision hygiene is structuring. Try to design an approach to making a judgment or solving a problem, and don’t just go into it trusting your intuition to give you the right answer.
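As a rough illustration of the structured approach Kahneman describes, here is a short Python sketch that scores each option dimension by dimension and delays the overall judgment to the very end. The dimensions, weights, and scores are all hypothetical, not drawn from the interview.

```python
# A minimal sketch of "structuring" a judgment: assess each attribute
# separately, and only combine the assessments at the end.
# Dimensions, weights, and scores below are hypothetical.
DIMENSIONS = {"technical_skill": 0.4, "communication": 0.3, "reliability": 0.3}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Delay the global evaluation: it is just a weighted sum of per-dimension scores."""
    return sum(DIMENSIONS[d] * dimension_scores[d] for d in DIMENSIONS)

candidates = {
    "candidate_1": {"technical_skill": 8, "communication": 5, "reliability": 7},
    "candidate_2": {"technical_skill": 6, "communication": 9, "reliability": 8},
}

for name, scores in candidates.items():
    print(name, round(overall_score(scores), 2))
```

The design choice mirrors the structured interview: the per-dimension judgments are made independently, so a strong impression on one attribute cannot quietly color the ratings on all the others.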

There are a number of controversies about whether this “structure” for decision-making can be written out in a clear way. Such a structure might be, for example, a written set of guidelines for judges that spell out how certain circumstances must (almost always) lead to certain criminal sentences. Or such a structure might be embodied in a computerized algorithm. Kahneman offers a qualified defense of creating structures for decision-making. As he says: “I have more confidence in the ability of institutions to improve their thinking than in the ability of individuals to improve their thinking.” When it comes to algorithms in particular, he argues:

Well, I think that there is widespread antipathy to algorithms, and it’s a special case of people’s preference for the natural over the artificial. In general we prefer something that is authentic over something that is fabricated, and we prefer something that’s human over something that is mechanical. And so we are strongly biased against algorithms. I think that’s true for all of us. Other things being equal, we would prefer a diagnosis to be made or a sentence to be passed by a human rather than by an algorithm. That’s an emotional thing.

But that feeling has to be weighed against the fact that algorithms, when they’re feasible, have major advantages over human judgment—one of them being that they are noise-free. That is, when you present the same problem to an algorithm on two occasions, you are going to get the same answer. So, that’s one big advantage of algorithms. The other is that they’re improvable. So, if you detect a bias or you detect something that is wrong, you can improve the algorithm much more easily than you can improve human judgment. And the third is that humans are biased and noisy. It’s not as if we’re talking of humans not being biased. The biases of humans are hidden by the noise in their judgment, whereas when there is a bias in an algorithm, you can see it because there is no noise to hide it. But the idea that only algorithms are biased is ridiculous; to the extent they have their biases, they learn them from people.

In addition, Kahneman points out that it is useful to separate the use of a structured approach, which provides insight and background for a decision, from the actual process of making the decision itself. There can be “deal-breaker” facts that cause a decision-maker to reach outside the structure that has been set up–but that doesn’t mean the structure isn’t a useful starting point.

[Y]ou really want to create a distinction between the final decision and the process of creating that decision. And in the process of creating a decision, diversity is a very good thing. When you’re constructing a committee to make decisions—whether of hiring or of strategy—you do not want people to come from exactly the same background and to have the same inclinations. You want diversity. You want different points of view represented, and you want different sources of knowledge represented. In some occasions increasing diversity in the making of the decision could reduce noise in the decision itself.

The real deep principle of what we call decision hygiene is independence. That is, you want items of information to be as independent of each other as possible. For example, you want witnesses who don’t talk to each other, and preferably who saw the same event from different perspectives. You do not want all your information to be redundant. So, good decisions are decisions that are made on the basis of diverse information.

Some Economics of the Black Death

“In October 1347 ships arrived in the Sicilian port of Messina carrying Genoese merchants from the Crimean Port of Kaffa. In addition to their cargoes, they carried a deadly new disease. Over the next five years, what would come to be known as the Black Death spread across Europe and the Middle East killing between 30 percent and 50 percent of the population …”

This is the opening of a review essay by Remi Jedwab, Noel D. Johnson, and Mark Koyama about “The Economic Impact of the Black Death” (Journal of Economic Literature, 2022, 60:1, 132-78, need library or personal subscription to access). Of course, public health data from the late 1300s is on the sketchy side. But to give a sense of how this research is done, here’s a map showing 274 locations with data about mortality rates during the Black Death.


With this data, one can extrapolate the likely patterns of the Black Death across Europe. With the added dimension of time, one can then look at places where the Black Death arrived sooner or later, and where its effect was relatively mild or severe.
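The excerpt doesn’t spell out how the extrapolation is done, but the general idea can be illustrated with simple spatial interpolation: estimate mortality in an undocumented place from nearby documented ones. Here is a sketch in Python using inverse-distance weighting, with invented coordinates and rates; it is not the authors’ method, just an illustration of the kind of exercise involved.

```python
# Generic inverse-distance-weighting sketch: estimate mortality at a place with
# no surviving record from nearby documented locations. All data are invented.
import math

# (latitude, longitude, estimated mortality rate) for a few documented locations
observations = [
    (38.2, 15.6, 0.50),   # e.g. a Sicilian port
    (43.8, 11.3, 0.45),   # e.g. a Tuscan city
    (48.9, 2.3, 0.35),    # e.g. a northern French city
]

def idw_estimate(lat: float, lon: float, power: float = 2.0) -> float:
    """Weight each observation by 1 / distance**power and take the weighted average."""
    weights, weighted_rates = [], []
    for obs_lat, obs_lon, rate in observations:
        dist = math.hypot(lat - obs_lat, lon - obs_lon)
        if dist == 0:
            return rate  # exactly at a documented location
        w = 1.0 / dist ** power
        weights.append(w)
        weighted_rates.append(w * rate)
    return sum(weighted_rates) / sum(weights)

print(round(idw_estimate(45.4, 9.2), 3))  # an undocumented location in between
```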

For an economist, prone to looking at outcomes in markets, perhaps the most obvious prediction is that in the short run, the Black Death should lead to severe disruptions of agriculture and trade that reduce wages for most people. A more subtle shift was that with widespread shortages, and with those who survived relatively flush with money, inflation rose. A number of European countries reacted to higher inflation by enacting laws against wage increases.

In England, the Statute of Laborers was passed in 1349 and imposed strict limitations on nominal wages. These restrictions were highly effective in limiting nominal wage growth during the 1350s, though of declining effectiveness thereafter. In France, a comparable statute was passed in 1351 regulating wages, prices, and guild admittances. Similar restrictions were imposed elsewhere in Europe (Heckscher 1955). In Florence, wages were permitted to rise for urban workers, but not for rural laborers who saw their real wages fall as they had to purchase basic commodities at “hyper-inflated prices” (Cohn 2007a, p. 468). Individuals who left their farms to seek new work were fined.

In the middle run of several decades, however, after economies had a chance to adjust to the worst of the disruptions from the Black Death, the reduction in population meant a reduced supply of workers. More rural workers migrated to the cities, which had space and jobs. There was more land per agricultural worker. All of this led to substantial growth in real wages and income by around 1400.

In addition, the combination of higher wages for workers and deaths among the nobility had a leveling effect on the distribution of income. One quirky consequence was that as more ordinary people were able to afford colorful and distinctive clothing, countries began to pass “sumptuary laws,” under which only the nobility were allowed to wear silk, or gold buttons, or certain colors, or to serve two meat courses at dinner. The authors write:

Finally, one institutional response to these leveling effects were sumptuary laws—laws restricting dress based on social status. These began to emerge in Europe in the thirteenth century. However, following the Black Death, they proliferated across Europe. This can be understood as a response to the increased spending power of workers following the plague. Arguing that sumptuary laws were an indication of economic leveling, Scheidel (2017, 206) contrasts an English sumptuary law from 1337 that restricted fur to the nobility to one post-plague that was a response to “growing mass affluence and eroding status barriers.” For example, legislation in England in 1363 notes that the lower orders were now dressing in finer and more colorful clothes. Desierto and Koyama (2020) examine this systematically using newly collected data on sumptuary legislation and a formal model. Their model rationalizes sumptuary legislation as an attempt by elites to repress status competition from below.

There are many other indirect effects of the Black Death in the longer term. The authors write:

[T]he impact of the Black Death interacted with institutional and cultural developments. Several important institutional developments stand out: (i) serfdom went into decline in western Europe as a direct consequence of the demographic crisis inaugurated by the Black Death (Bailey 2014) and this had important long-run consequences (Gingerich and Vogler 2021); (ii) the regime of high age at first marriage and constrained fertility was strengthened (De Moor and van Zanden 2010, Voigtländer and Voth 2013a); (iii) stronger, more cohesive, territorial states emerged in western Europe; and (iv) there was a weakening in the political power of religious organizations. All of these factors have been argued as playing a role in the economic rise of Europe, and specifically northwestern Europe.

One of the puzzles of global economic history is the “Great Divergence,” the pattern by which the economies of Europe, soon after the Black Death, began to grow substantially faster than the economies of Asia and the Middle East, which had previously been the world leaders. Of course, many factors were at work. But ironically, one contributor seems to have been the disruptions in economic, social, and political patterns caused by the Black Death.

Women In Parliaments Around the World

The Inter-Parliamentary Union has released its Women in Parliament in 2021: The Year in Review report, a systematic count of the share of women in national legislatures around the world. Women have been increasing their share of the members of national legislatures and parliaments over time. Here’s the overview from 1995 up through 2021.

When it comes to chairing a legislative committee, women hold 27% of the chairs–similar to their overall representation. However, the distribution of these chairs across types of committees is skewed.

Much of the report is a country-by-country overview of changes in the last year or two. But overall, what causes some countries to have substantially higher shares of women than others? Cultural and historical factors surely play a role, but the report emphasizes some institutional factors as well. For example, a number of countries have a quota requirement, such as requiring that each party field a certain proportion of women among its legislative candidates.

As in previous years, quotas appeared to be the most critical factor in determining women’s representation in 2021. Among the 30 countries that had some form of quota system in place for the single or lower house, 31.9 per cent women were elected. This varied a little based on the type of quota – countries with legislated quotas elected 31.8 per cent women on average, and those with only voluntary quotas adopted by political parties elected 32 per cent women. On the other hand, only 19.5 per cent women were elected in lower or single houses in countries with no form of legislated or voluntary quotas.

Another institutional factor that seems to matter here is that countries where elections are decided by majority vote in a district seem to have a smaller share of women legislators than those with proportional voting–that is, where voters cast their ballot for a party, parties receive seats in the legislature according to how many votes they receive, and parties (mostly) determine who will fill those seats.
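For readers curious how proportional voting turns party vote totals into seats, here is a short Python sketch of one common allocation rule, the D’Hondt method. The report does not single out any particular formula, and the party names and vote totals below are hypothetical.

```python
# Sketch of the D'Hondt method: seats are awarded one at a time to the party
# with the largest current quotient votes / (seats_already_won + 1).
# Party names and vote totals are hypothetical.
def dhondt(votes: dict[str, int], total_seats: int) -> dict[str, int]:
    seats = {party: 0 for party in votes}
    for _ in range(total_seats):
        # Award the next seat to the party with the highest quotient.
        winner = max(votes, key=lambda p: votes[p] / (seats[p] + 1))
        seats[winner] += 1
    return seats

print(dhondt({"Party A": 340_000, "Party B": 280_000, "Party C": 160_000}, 10))
# The resulting split (4, 4, 2 seats) is roughly proportional to the vote shares.
```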

This report makes no effort to look at differences in how men and women vote when in legislatures. But the underlying facts seem worth noting.