The Pandemic Effect on World Trade: Some Early Data

The World Trade Statistical Review 2020 from the World Trade Organization is an annual report which is mainly focused on detailed data for trade patterns from the previous year. I find that it requires some mental effort to remember what the world economy looked like in 2019, but of course, trade tensions were already high. The value of global merchandise trade fell 3% in 2019–the first time it had fallen since the Great Recession years back in 2008-9–while the value of services trade rose 2.1%. However, this year's report also includes some preliminary data on how aspects of global trade have evolved since the COVID-19 pandemic hit earlier this year. For example:

One early indicator of trade is based on data from purchasing managers and the new orders they have received for goods that will be exported. The sharp fall, and then the rebound, suggest that the level of these orders may have bottomed out in March or April, and that a recovery in actual exports may be perceptible in the June data.
Here's a measure of the quantity of container shipping. Notice that the fall in the last few months is not (yet) as deep or severe as the decline during the Great Recession.
As one might expect, the number of commercial flights plummeted, falling by something like three-quarters in March 2020.
Tourism and travel are major elements of international trade: for example, when a foreign traveler in the US spends money on US goods and services, it is treated in the trade statistics as an "export" of US production to a foreign consumer. The US typically runs a surplus in the travel industry, with exports (blue line) well above imports (gray line), but both have dropped substantially in early 2020.
Finally, here are a couple of figures comparing national-level data on exports in April 2020 and on imports in March 2020 to the monthly data for a year earlier. 

Those who believe that international trade is bad for the US economy should of course welcome the 2020 fall in trade as a silver lining in what is otherwise shaping up to be a dismal year for the economy. Alternatively, they might reconsider the extent to which trade is the fundamental source of US economic problems, or the extent to which reducing trade is a useful solution to those problems.

Slavery and the History of US Economic Growth

Slavery was both a set of economic arrangements and a raw authoritarian violation of human rights. It's unsurprising that there has been long-standing controversy over the relationship between the two: for example, did slavery in the United States boost economic growth or hold it back? Gavin Wright revisits these issues in "Slavery and Anglo‐American capitalism revisited" (Economic History Review, May 2020, 73:2, pp. 353-383, subscription required). The paper was also the subject of the Tawney lecture at the Economic History Society meetings in 2019, and the one-hour lecture can be freely viewed here.

Wright frames his discussion around the "Williams thesis," based on Eric Williams's 1944 book Capitalism and Slavery, which focused on the United Kingdom. Williams argued that while slavery played an important role in British capitalism in the 18th century–in particular, the brutalities of slave labor were central to the production of sugar and thus to Britain's international trade–by early in the 19th century the British economy and exports had evolved toward manufacture of industrial products, in such a way that slave labor was no longer vital. Wright argues that as the US economy of the 19th century evolved, slavery tended to hold back US economic growth.
To set the stage, let's be clear that the economic activity of slavery was deeply entangled with capitalism. Wright offers an example that will resonate with those of us working in higher education:

The prominence of slave-based commerce for the Atlantic economy provides the background for the arresting connections reported by C. S. Wilder in his book Ebony and ivy, associating early American universities with slavery. The first five colleges in British America were major beneficiaries of the African slave trade and slavery. ‘Harvard became the first in a long line of North American schools to target wealthy planters as a source of enrollments and income’. The reason for what might seem an incongruous liaison is not hard to identify: ‘The American college was an extension of merchant wealth’. A wealthy merchant in colonial America was perforce engaged with the slave trade or slave-based commerce.

However, as numerous writers have pointed out over time, the coexistence of slavery with British and American capitalism of the 17th century does not prove that slavery was necessary or sufficient for an emerging capitalism. Slavery was also widespread across what we now call Latin America. At that time, Spain and Portugal (among others) were also active participants in the slave trade, yet their economies did not develop an industrial revolution like that of the UK. Countries all over Latin America were recipients of slaves, like the area that became the US, but those countries did not develop a US-style economy. Clearly, drawing a straight line from slavery to capitalism of the Anglo-American variety would be wildly simplistic.

Wright argues that slavery did seem essential to sugar plantations: "Sugar plantations required slave labour not because of any efficiency advantage associated with that organizational system, but because it was all but impossible to attract free labour to those locations and working conditions." But Wright argues that when it came to cotton (or tobacco or other crops), slavery did not have any particular advantage over free labor. Thus, US cotton plantations run by slave labor did not come into being because they had an economic advantage, but rather because slaveowners saw it as a way to benefit from owning slaves.
The Atlantic economy of the eighteenth century was propelled by sugar, a quintessential slave crop. In contrast, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour, not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the nineteenth-century American South.
It's true that a lot of pro-slavery writers in the 1850s boasted that cotton was essential to the US economy, as a way of arguing that their own role as slave-owners was also essential. But slave-holders also argued that wage labor was exploitative and that slavery represented true Christian morality and the Golden Rule. Rather than listening to the explanations of those trying to justify evil, it's more useful to look at what actually happened in history. If it were true that slave-produced cotton was essential to US economic growth, then the end of slavery should have wiped out US economic growth. But it didn't. Wright points to some research literature looking back at the US economy in the 1830s: "Cotton production accounted for about 5 per cent of GDP at that time. Cotton dominated US exports after 1820, but exports never exceeded 7 per cent of GDP during the antebellum period. The chief sources of US growth were domestic. … [The] cotton staple growth theory has been overwhelmingly rejected by economic historians as an explanation for US growth in the antebellum era."

Similarly, if it were true that slave plantations were the most efficient way of growing cotton, then the end of slavery should have caused the price of cotton to rise on world markets. But it didn't.
The best evidence that slavery was not essential for cotton supply is what happened after slavery’s demise. The wartime and postwar years of ‘cotton famine’ were times of great hardship for Lancashire, only partially mitigated by high-cost imports from India, Egypt, and Brazil. After the war, however, merchants and railroads flooded into the south-east, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Lancashire was back to its antebellum level … 
Again, slave labor on US cotton plantations was for the benefit of the slaveholders, not the US economy as a whole. Indeed, as the 19th century evolved, the US South consistently underperformed as a cotton supplier. Wright points out three reasons. 

First, \”[t]he region closed the African slave trade in 1807 and failed to recruit free labour,
making labour supply inelastic.\” Why were slaveowners against having more slaves? As Wright points out: \”After voting for secession in 1861 by 84 to 14, the Mississippi convention voted down a re-opening resolution by 66 to 13. The reason for this ostensible contradiction is not difficult to identify: to
re-open the African trade was to threaten the wealth of thousands of slaveholders across the South.\” In short, bringing in more slaves would have reduce the price of existing slaves–so existing slaveowners were against it. In addition, immigrant to the US from, say, 1820 to 1880 overwhelmingly went to free states. Slave states in the southwest \”displayed net white outmigration, even during cotton booms, at times when one might have expected a rush of immigration. One result was low population density and a level of cotton production well below potential.\”

Second, \”[s]laveholders neglected infrastructure, so that large sections of the antebellum South were bypassed by the slave economy and left on the margins of commercial agriculture.\” The middle of the 19th century was a time when the US had a vast expansion of turnpikes, railroads, canals, and other infrastructure often built by state-charted corporations. However, almost all of this contruction occurred in the northern states. Not only were the southern states uninterested, they actively blocked national-level efforts along these lines: \”Over time, however, the slave South increasingly assumed the role of obstructer to a national pro-growth agenda. ,,, [S]outhern presidents vetoed seven Rivers & Harbors bills between 1838 and 1860, frustrating the ambitions of entrepreneurs in the Great Lakes states.\”

Third, \”the fixed-cost character of slavery meant that even large plantations aimed at self-sufficiency in foodstuffs, limiting the overall degree of market specialization.\” One main advantage of slavery in cotton production was that it guaranteed having sufficient labor available at the two key times of the year for cotton: planting and harvesting. But during the rest of the year, most cotton plantations grew other crops and raised livestock

The shortcomings of the South as a cotton producer during this time were clear to some contemporary observers. Wright says: "Particularly notable are the views of Thomas Ellison, long-time chronicler and statistician of cotton markets, who observed in 1858: `That the Southern regions of the United States are capable of producing a much larger quantity of Cotton than has yet been raised is very evident; in fact, their resources are, practically speaking, almost without limit'. What was it that restrained this potential supply? Ellison had no doubt that the culprit was slavery …"

In short, the slave plantations of the American South were a success for the slaveowners, but not for the US economy. From a broader social perspective, slavery was a policy that scared off new immigrants, ignored infrastructure, and blocked the education and incentives of much of the workforce. These policies are not conducive to growth. As Wright puts it: "Slavery was a source of regional impoverishment in nineteenth-century America, not a major contributor to national growth."

A Gentle Case for Paying Kidney Donors

Simon Haeder makes a gentle case, nudging the undecided to consider the possibility of paying kidney donors, in "Thinking the Unthinkable: Buying and Selling Human Organs" (Milken Institute Review, Third Quarter 2020, pp. 44-52).
Today, 15 percent of Americans suffer from chronic kidney disease. Of these, roughly 800,000 have progressed to end-stage renal disease, where kidney function has been reduced to 10 to 15 percent of normal capacity. Most of them – half a million or so – require regular dialysis, and eventually a transplant, to survive.

Dialysis sustains life, yet it is far from a perfect substitute for normal kidney function. It is a time-consuming process that often leaves patients fatigued, with increased risks of infection and sepsis, and subject to a number of other ailments. What’s more, dialysis is very expensive, with an average annual cost of $90,000 that is largely underwritten by government. In 2018 alone, Medicare spent $114 billion on chronic kidney disease patients, with the end-stage renal disease population, which makes up a meager 1 percent of the total Medicare population, accounting for more than $35 billion. And this figure does not include spending by private insurers or patients’ out-of-pocket payments.

Kidney transplantation is superior to dialysis in every way. It not only increases the quality of life for patients, but also substantially decreases long-term costs of care for patients with ESRD [end-stage renal disease]. All told, a kidney transplant is worth on the order of a half-million dollars to kidney disease sufferers and those who share the cost of dialysis. Transplants are also head and shoulders above dialysis in terms of life expectancy. While the five-year survival rate for end-stage renal disease is 35 percent, it increases to 97 percent for those receiving transplants.

One unsurprising result of the explosion in end-stage renal disease is that kidney procurement has consistently failed to provide enough organs for transplants. The waiting list for kidneys has ranged from 76,000 to 87,000 over the past decade, as more than 20,000 individuals are added to the rolls each year. And with demand increasing at around 10 percent annually, a lot of those in need are just out of luck. On average, 13 people die each day waiting for kidneys (and another seven die waiting for other organs). It is highly unlikely that more effective appeals to the kindness of others will solve the shortage long term. It certainly hasn’t so far.
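As a rough consistency check on the spending figures quoted above, here is a back-of-envelope sketch in Python. The patient count and per-patient cost come from the excerpt; the comparison is my own arithmetic, and Medicare is only one of several payers.

```python
# Back-of-envelope check on the dialysis figures quoted above.
dialysis_patients = 500_000        # "half a million or so" on regular dialysis
annual_cost_per_patient = 90_000   # average annual cost of dialysis, in dollars

total = dialysis_patients * annual_cost_per_patient
print(f"Implied total dialysis spending: ${total / 1e9:.0f} billion per year")
# ~$45 billion -- broadly consistent with Medicare's $35+ billion on end-stage
# renal disease plus spending by private insurers and patients out of pocket.
```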

The arguments for and against paying kidney donors have become fairly well known over time. Haeder runs through them clearly, and there's no need to belabor them here. But for example:
Yes, it would be nice if there were a surge in voluntary donations. But it hasn't happened, in the US or anywhere else. So people keep dying for lack of a kidney transplant.

Yes, there are hard questions about the incentives involved with paying for kidney donations, but the hard questions cut in both directions. Haeder writes: 

The very idea of putting prices on body parts infuriates many by besmirching the ideal of altruistic donations. Of course, the altruism in the current transplantation process stops with the donor, the recipient and their families – everyone else is getting paid. Moreover, proponents rightfully point out that we already allow compensation to individuals for donations of blood plasma and for providing surrogate motherhood services, so the expansion to organs would be a change in degree only.

There are concerns that paying people to donate a kidney would exploit the poor. But it requires a bit of fancy philosophical footwork to argue that you must deny someone the option to be paid a large sum of money so as to avoid "exploiting" them. Moreover, there are many aspects of society, like paying people to take jobs that carry a greater risk of injury or death, that "exploit" the poor in the same sense. Haeder notes:

We have long been perfectly willing to exploit the poor by paying them to enroll in potentially dangerous prescription drug trials – and, most importantly, by encouraging them to put their lives on the line by joining the military.

Moreover, large and profitable private-sector companies that provide dialysis treatments have an incentive to lobby against paying kidney donors, because a rise in kidney transplants would cut into their profits. If one chooses to be uncharitable about the motives of the other side of this debate, one could point out that those who are against paying kidney donors are on the side of big for-profit dialysis companies and against making direct payments to individual kidney donors who may happen to be poor.

For previous posts with relevance to the intersection of economics, incentives, and paying for kidney transplants, or paying for blood, plasma, bone marrow, and breast milk, see: 

Federal Reserve Assets Explode in Size (Again)

The Federal Reserve is reinventing itself in plain sight, again. Here's a figure from the Fed website, "Recent Balance Sheet Trends." From the beginning of March through July 20, total assets held by the Fed rose $2.3 trillion, from $3.9 trillion to $6.2 trillion, as shown by the top blue line. (The vertical scale on the figure shows millions of millions–that is, trillions.)
The description of the pattern at the Fed website sounds like this: 

The size and composition of assets held by the Federal Reserve has evolved noticeably over the past decade. At the onset of the financial crisis [in 2008], the level of securities held outright declined as the Federal Reserve sold Treasury securities to accommodate the increase in credit extended through liquidity facilities. Though the various liquidity facilities wound down significantly over the course of 2009, the level of securities held outright expanded significantly between 2009 and late 2014 as the FOMC conducted a series of large-scale asset purchase programs to support the U.S. economy. Then, securities held outright declined as a result of the FOMC's balance sheet normalization program that took place between October 2017 and August 2019.

The orange line shows that most of the increase is due to increased holdings by the Federal Reserve of "Securities Held Outright." A more detailed breakdown of the Fed balance sheet as of July 23 shows that in the last year (and mostly in the last few months), Fed holdings of Treasury securities have risen $2.1 trillion, while holdings of mortgage-backed securities have risen more than $400 billion. By comparison, Fed holdings of corporate bonds and short-term commercial paper are relatively small.
The bump at the lower right-hand side of the figure shows "liquidity facilities," which are ways in which the Fed makes short-term loans to key players in financial markets so that, during a time of severe financial and economic stress, the markets don't lock up for lack of short-term funding. These loans were as high as $500 billion in April and May, but are now down to $150 billion and falling.
In short, when people wonder about the source of the money for the enormous US budget deficits in recent months during the COVID-19 pandemic, one answer is that US personal saving has nearly quadrupled in the last few months, as opportunities to spend contracted and as people worried about the size of their personal nest eggs. In one way or another (say, via a money market fund or bank account), a chunk of this money was flowing into government borrowing. The other main part of the answer is that the Federal Reserve is buying federal debt, and in that way is financing the US government support of the economy.
At the worst of the Great Recession, in October and November 2008, the Fed increased the assets it was holding by about $1.2 trillion.  For comparison, during the three months from March to May 2020, the Fed increased the assets it was holding by $3 trillion–more than double what it did in the heart of the Great Recession.  
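For concreteness, here is the simple arithmetic behind that comparison, a minimal sketch using only the figures in the preceding paragraph:

```python
# Comparing the Fed's two balance-sheet expansions, in trillions of dollars.
great_recession = 1.2   # added October-November 2008 (about 2 months)
covid_response = 3.0    # added March-May 2020 (about 3 months)

print(f"Total expansion ratio: {covid_response / great_recession:.1f}x")           # 2.5x
print(f"Monthly pace: ${great_recession / 2:.1f}T vs ${covid_response / 3:.1f}T")  # $0.6T vs $1.0T
```

So the 2020 expansion was larger both in total and in its monthly pace.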
The Fed took one large step in transforming itself back in the Great Recession of 2007-2009, when it shifted to a sustained policy of long-term asset purchases, often known as "quantitative easing." In 2014, the Fed decided to slowly taper off from that policy, and Fed assets declined modestly through late 2019. The Fed is now transforming itself again.
The COVID-19 recession is an extraordinary economic shock, well beyond what even a prudent household or business would have planned for. It's difficult and a little unseemly to criticize an emergency response, and I won't do so. But even when emergency responses are justified, that doesn't make the costs go away. And by definition, an emergency response is not intended to be sustained for long periods of time.

An Update Concerning the Economics of Lighthouses

Lighthouses have been a canonical example for economists–but what that example is intended to illustrate has shifted dramatically over time. 

The lighthouse example was originally used by economists as an example of a situation where government provision of a good was necessary: because lighthouses could not easily impose charges on ships passing at sea, a private firm could not earn a profit by investing to build a lighthouse. This example dates back at least to John Stuart Mill's 1848 Principles of Political Economy. In Book V, Chapter XI, "Of the Grounds and Limits of the Laisser-faire or Non-Interference Principle," Mill writes:

[I]t is a proper office of government to build and maintain lighthouses, establish buoys, &c. for the security of navigation: for since it is impossible that the ships at sea which are benefited by a lighthouse, should be made to pay a toll on the occasion of its use, no one would build lighthouses from motives of personal interest, unless indemnified and rewarded from a compulsory levy made by the state.

The lighthouse example was then cited by a succession of prominent authors as an example where government action was needed because markets could not function, including repeated mentions in Paul Samuelson's classic introductory textbook that defined economics pedagogy for most of the second half of the 20th century (and arguably since then, too).
But Ronald Coase reversed this argument in his classic essay "The Lighthouse in Economics" (Journal of Law and Economics, October 1974, 17:2, pp. 357-376). He discussed the use of the lighthouse example by Mill, Samuelson, and others, and then pointed out that however much these earlier arguments appealed to intuition, as a matter of actual historical fact, many British lighthouses in the 17th and 18th centuries were built and run by private companies.
The common process was that private investors would petition the Crown for a "patent" to build a lighthouse in a certain location. The petition was signed by local ship-builders and ship-owners, who pledged that they were willing to help pay for the lighthouse. "The King presumably used these grants of patents on occasion as a means of rewarding those who had served him. Later, the right to operate a lighthouse and to levy tolls was granted to individuals by Acts of Parliament." Ships that arrived in nearby ports were then charged tolls. In other cases, Trinity House–an institution dating back to the medieval guilds which had authority to regulate pilotage and shipping–would apply for a lighthouse "patent," and then lease the patent to a private individual who would provide the money for building the lighthouse and receive the funds.
All markets rely on enforcing contracts and property rights. Coase argued that in the case of British lighthouses of this earlier time period: 

The role of the government was limited to the establishment and enforcement of property rights in the lighthouse. … [E]conomists wishing to point to a service which is best provided by the government should use an example which has a more solid backing.

Of course, one might object that when it comes to the distinction between the roles of government and markets in lighthouses, Coase is splitting hairs. This is a case where government granted a right to set up what is in effect a local monopoly, and where government often also played a role (through customs agents) in collecting tolls in the ports. This is clearly not a multi-competitor free market in action. But on the other side, Coase did show that the earlier argument that government itself needed to provide lighthouses was oversimplified. The British government back in the 17th and 18th centuries was not choosing sites for the private lighthouses, nor was it financing their construction. Moreover, the lighthouses were private property: those who owned lighthouses could sell them, or bequeath them to their heirs. In modern terminology, Coase is saying that a public-private partnership, in which the private partner is responsible for investment but is also able to make a profit, is not the same thing as outright government provision.
There have been ongoing arguments since the 1974 Coase essay concerning how to think about the public and private roles. But there has been less discussion of how changing technology might reshape the lines between public and private. Theresa Levitt argues that Coase was correct to point out the role of the private sector in lighthouses of the 17th and 18th centuries. However, she argues that changes in lens technology, which greatly altered the cost and power of lighthouses, led back to the old wisdom that government provision of lighthouses was necessary. Her essay is "When Lighthouses became Public Goods: The Role of Technological Change" (Technology and Culture, January 2020, 61:1, pp. 144-172). She writes:

The crucial technological change was in the illumination apparatus, with the introduction of mirrors in the 1780s and Fresnel lenses in the 1820s. This was not only a change in technical performance, as each development increased the brightness by more than an order of magnitude. It also brought about the sort of social and institutional transformations that historians of technology have identified as a technological system. As lighthouses became reliably visible at safe distances for sea-coast lighting for the first time, their purpose and function changed, as well as their costs and financing. The lighthouse system of the seventeenth century discussed by Coase was fundamentally different from that of John Stuart Mill and Paul Samuelson, with different expectations, expenses, and implications for excludability. While a market could support the lights that existed before 1780, which were primarily effective at close range, it could not support the transformed system that emerged in the wake of improved illumination. Nor could the market provide for the technological improvements, with no private owners of lighthouses investing in Fresnel lenses, one of the key improvements. Only after England introduced greater state intervention did the lights improve.

The private lighthouses of the 17th and 18th centuries mostly burned candles or coal, and could be seen for no more than about five miles. But in the late 18th century and into the 19th century, there was a wave of innovation involving oil lamps for illumination and lenses to focus the light out to sea. Levitt argues that the breakthrough innovation was the Fresnel lens in 1820. France had abolished private lighthouses in 1792, and instead used a government Lighthouse Commission. This commission hired Augustin Fresnel to design and install the new light, which was visible essentially all the way to the horizon. Levitt describes the next step:

The French Lighthouse Commission placed Fresnel in charge of a massive overhaul of the French lighthouse system known as the Carte des phares. … Instead of focusing on port entrances alone, the plan was now to form a rational network which would illuminate the entire coast, so that whenever a ship went out of sight of one lighthouse, it would already be entering into sight of another. … Fresnel divided the lights into different orders based on their size, with the largest, first-order lights warning of a ship’s first approach to the coast, second-order lights aiding in the navigation of tricky passages, and the smaller “feux de port” marking port entrances. …

One of the key attributes of the system was that it would allow mariners to distinguish between the lights, and thus know their precise location at all times. Fresnel proposed three distinct light signatures: a rotating light of eight bulls-eye panels that flashed every minute, a rotating light of sixteen panels that flashed every thirty seconds and a fixed light that gave out a continuous beam. The rotating lenses were mounted on columns that were turned first by an escapement mechanism, then by chariot wheels, and finally on a mercury bath (an idea conceived by Fresnel but not put into practice until the 1890s). The parabolic apparatus could also be rotated to produce a distinct flashing light. But Fresnel went one step further: carefully arranging the various lights so that no two similar ones were alike, and, with the visibility of lights overlapping, a sailor would be able to identify their location with precision. Jules Michelet commemorated the completion of the project in 1854 with the phrase, “For the sailor who steers by the stars, it was as if another heaven had descended to earth.”

Levitt makes a case that when John Stuart Mill was writing about lighthouses back in 1848, he was well aware of these changes. It turns out that one of Mill's childhood teachers was a British leader in efforts to upgrade Britain's lighthouses, which for a long time lagged behind those of France. The US soon adopted the French approach.

The United States had leapfrogged over Britain as well after adopting the French model of lighthouse provisioning. The colonies built ten lighthouses under British rule … The federal government appropriated them in 1789 in the first application of the Commerce Clause. Substantial investment in infrastructure only came in the 1850s, however, when a coalition of sailors, scientists, and engineers demanded the creation of a lighthouse board modeled on the French Lighthouse Committee. This board effected a massive, tax-funded program of new building and updated technology, and by the end of 1859 the United States had more than twice as many lighthouses as the British, virtually all of them equipped with Fresnel lenses.

Thus, the history of lighthouses suggests a lesson: for focused and local projects with static technology, a public-private partnership can work well. But for development of and investment in a new technology applied across an interconnected national network, where it is difficult to charge end-users for value received, government may usefully play a larger role. Levitt writes:

This fact can also help us understand the lighthouse’s transformation into a public good. When lights simply marked harbors, one could charge every ship that entered the harbor. But when a light marked some empty, desolate stretch of coast, it was not so easy to charge whoever happened to pass by. Rather than being “plucked from the air,” Mill’s position accurately reflected the new situation. … A more detailed study of lighthouse administration supports the status of the lighthouse as a public good: the private market failed to either develop or invest in the technology necessary to establish the effective sea-coast lights now associated with the term “lighthouse.”

The Closing of the (Urban) Frontier

Much of the history of the United States for its first century is a story of expansion from its original foothold on the eastern seaboard across the North American continent. By 1890 or so, that expansion was over. The historian Frederick J. Turner wrote about this shift in a famous 1893 essay, "The Significance of the Frontier in American History." He began:

In a recent bulletin of the Superintendent of the Census for 1890 appear these significant words: “Up to and including 1880 the country had a frontier of settlement, but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not, therefore, any longer have a place in the census reports.” This brief official statement marks the closing of a great historic movement. Up to our own day American history has been in a large degree the history of the colonization of the Great West. The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development.

America's narrative of opportunity and mobility shifted at the tail end of the 19th century. In the 20th century, the narrative focused much less on a rural frontier; instead, it was a story of moving to the city, where the opportunities to raise one's economic and social status were to be found. But over the last 50 years or so, Edward L. Glaeser argues, we have in effect seen "The Closing of America's Urban Frontier" (Cityscape: US Department of Housing and Urban Development, 2020, 22:2, pp. 5-21).

As Glaeser points out, the end of the western frontier was an important cultural event. But even back in the decades before Turner was writing, a relatively small share of the US population was actually made up of settlers. The land that mattered most for US economic growth was not plots of ranchland in wide-open western states, but instead the areas surrounding the urban cores of that time, what Glaeser calls the "open urban frontier." The overwhelming population movement of the late 19th and early 20th century was from rural to urban areas, often in search of economic opportunity. Glaeser writes (citations omitted):

In a sense, America’s urban frontier became more open during Turner’s lifetime [1861-1932] because the traditional downsides of urban crowding, such as contagious disease, became less problematic. The urban frontier remained largely open during the dynamic 25 years that followed World War II. African-Americans migrated north by the millions to flee the Jim Crow South and take advantage of urban industrial jobs. Americans built new car-oriented cities in Sun Belt states like Arizona and Texas. The movement of people and firms diminished the vast income differences that once existed between locations.

Sometime around 1970, the urban frontier began to close. Community groups mobilized and opposed new housing and infrastructure. Highway revolts slowed urban expansion in car-oriented suburbs. Historic preservation made it more difficult to add new density in older cities. Suburbs crafted land-use restrictions that stopped new construction. While some productive Sun Belt cities still permitted significant amounts of new housing, even those one-time refuges of affordable urbanism had begun to be more restrictive. …

Migration has fallen dramatically over the past 20 years, and poorer migrants no longer move disproportionately to richer places. Housing costs have risen sharply in more productive places, which has generated a wealth shift from the young to the old. Income convergence across regions has stalled.  … America’s growing geographic sclerosis makes it increasingly difficult for out-migration to solve the problems of local joblessness. …

The closing of America’s urban frontier seems to be a far more significant event in American economic history than Turner’s motivating fact that “the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line.” Vast amounts of the American West were unpopulated in Turner’s time and remain so today. Cheap land could still be had for homesteading in 1893, and there remains plenty of inexpensive ranchland today for anyone who wants the rugged life of a mid-19th century frontiersman. By contrast, the high price of accessing America’s most productive urban areas today is an important fact of life for tens of millions of Americans.

Glaeser discusses these shifts in some detail, but ultimately, his focus is on housing costs. In the last few decades, metro areas with exceptionally high productivity and a large share of high-skilled workers have experienced a virtuous circle, where they become magnets for other high-skilled workers and still greater productivity. However, when this shift is combined with limits on housing and infrastructure, these urban areas become unaffordably expensive for those who are not already high-skill workers. The young up-and-coming go-getter, especially if the person is thinking about starting a family, would have to hesitate before moving to these areas unless there is already a high-paying job lined up. 

At least in theory, at some point the cost of living in these high-cost, high-skilled, high-productivity areas should rise so high that economic activity is displaced to other areas. But at least so far, any such process of displacement has been very slow. What gets displaced to other metro areas is often a fairly standard factory or office, not the companies (or parts of companies) with the highest economic value-added.
There are essentially two non-exclusive approaches here. One is to make it easier for people of average and lower skill levels to move to the skill-magnet cities. The major step here is to make it more affordable to live in those metro areas, which involves thinking about a large surge of housing construction along with patterns of transportation. The other approach is to figure out how to increase the number of cities with concentrations of high-skilled and highly productive workers, and in particular how to spread such cities more widely across the country. I've written about some proposals along these lines in "Re-seeding America's Economic and Technological Future" (January 31, 2020). Both of these approaches have numerous costs and tradeoffs, and actual policies to achieve these goals are largely unproven.
Although I don't have a proven policy to propose, I'll note that this separation across US metro areas isn't healthy in economic terms. When people are hindered by high housing costs from moving to where they could be more productive and earn higher wages, economic growth suffers. It's also not healthy in political terms. When looking at a political map of the United States, the separation between the cities that have become magnets for high-skilled, knowledge-based growth and the rest of the country is very stark.

Some Ways the Pandemic Could Alter the Shape of the US Economy

It's clear that the COVID-19 pandemic has caused a recession. But is the form of this particular recession also reshaping the US economy in other ways? The Hamilton Project at the Brookings Institution recently published a set of four papers on the subject "How COVID-19 is Reshaping the Future of Business and Work." Video of a one-hour discussion by some of the authors, along with links to the papers, is available here.

Recessions often result from an imbalance in the economy—for example, overinvestment in a sector, asset bubbles, or excessive leverage by businesses and households—and a rapid change in expectations about the future. … The COVID-19 recession was precipitated by necessary collective action taken to preserve the lives of Americans and to buy time to put responsive public health measures in place; a partial shutdown of the economy resulted from decisions by federal, state, and local governments as well as decisions by businesses and households. The nature of the shutdown led to a much sharper contraction than during prior recessions but also—so far—to a shorter period during which the economy was contracting. The unemployment rate began to fall just two months after it initially rose, and job gains in May were the fastest on record (BLS 2020). Retail sales bounced up in May after a sharp downturn in April (U.S. Census Bureau 2020a). Still, the quick onset of the recovery has not meant a full rebound, and the resurgence of the virus in June and July may signal more ups and downs for the economy. Even if improvements in the labor market and spending continue to be significant, the U.S. economy will likely face a sharply elevated unemployment rate and sizable gap in output relative to precrisis levels for well over a year (Congressional Budget Office 2020).

What are some likely effects of this particular kind of recession? David Autor and Elisabeth Reynolds point out several of them in "The Nature of Work after the COVID Crisis: Too Few Low-Wage Jobs" (July 2020). One example is what they call "telepresence," which is meant to describe a wider phenomenon than just telecommuting.

Autor and Reynolds trace the term "telepresence" to an essay a few years back about underwater drones. As they write:

Placing people in any physically hostile environment—such as at the bottom of the sea, in Earth’s upper atmosphere, at a bomb disposal site—entails costly, energy-intensive, life-support systems that provide climate control (i.e., oxygen, temperature regulation, atmospheric pressure), water delivery, waste disposal, and so on. By obviating these needs, telepresence not only reduces costs but also typically creates better functionality: machines unencumbered by physically present operators can take on tasks that would be perilous with humans aboard. These same lessons apply to workplaces.

Though (most) work environments are not overtly hostile to human life, they are expensive, duplicative places for performing tasks that many employees could telepresently accomplish from elsewhere, albeit with a loss of the important social aspects of work that they facilitate. Not only is providing and maintaining physical offices costly for employers, but also the need to be physically present in offices imposes substantial indirect costs on the employee. The Census Bureau estimates that U.S. workers spend an average of 27 minutes commuting to work one way, which cumulates to 225 hours per year (U.S. Census Bureau 2019; authors’ calculations). Arguably, many of us who perform “knowledge work” have been so accustomed to the habit of “being there” that we failed to notice the rapid improvements in the next best alternative: not being there.
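As a quick check on the authors' commuting arithmetic, here is a minimal sketch. The 27-minute figure is from the quote; the round-trip assumption and the roughly 250 working days per year are my inferences, not stated in the excerpt.

```python
# Reconstructing the commuting-time arithmetic quoted above.
ONE_WAY_MINUTES = 27        # Census Bureau estimate, from the excerpt
WORK_DAYS_PER_YEAR = 250    # assumed; not stated in the excerpt

round_trip_minutes = ONE_WAY_MINUTES * 2                     # 54 minutes per day
annual_hours = round_trip_minutes * WORK_DAYS_PER_YEAR / 60
print(f"Commuting time: {annual_hours:.0f} hours per year")  # 225 hours, matching the quote
```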

Telepresence is not just a matter of commuting to previous job locations, of course. It also involves business travel, and rebalances the question of when it\’s worth a personal trip and when telepresence will suffice. It involves how telepresence may substitute for in-person visits in health care, education, retail shopping, perhaps even in areas like guidance for doing your own small repairs around the home. It involves shifts in the travel, hospitality, and entertainment businesses. Once these possibilities are discovered and explored, they will not just vanish from memory as the economy recovers. 
 Autor and Reynolds also point to prospects for \”urban de-densification\”: 

The past three decades have witnessed an urban renaissance. U.S. cities have seen steep reductions in crime, significant gains in racial and ethnic diversity, outsized increases in educational attainment, and a reversal of the tide of suburbanization that drew young, upwardly mobile families out of cities in earlier decades (Autor 2019; Autor and Fournier 2019; Berry and Glaeser 2005; Diamond 2016; Glaeser 2020). It seems plausible, though far from certain, that the postpandemic economy will see a partial reversal of these trends. If financiers, consultants, product designers, researchers, marketing executives, and corporate heads conclude that it is no longer necessary to commute daily to crowded downtown offices, and moreover, if business travelers find that they need to appear at these locations less frequently, this may spur a decline of the economic centrality, and even the cultural vitality, of cities.

Cities have needed to redefine and reinvent themselves repeatedly over the decades and centuries. In the aftermath of the pandemic, they may need to do it again. 
Autor and Reynolds also point to \”automation forcing,\” which is just the idea that in case you were already worried about automation and labor markets, now there is additional economic pressure to substitute machines for people. They write: 

Spurred by social distancing requirements and stay-at-home orders that generated a severe temporary labor shortage, firms have discovered new ways to harness emerging technologies to accomplish their core tasks with less human labor—fewer workers per store, fewer security guards and more cameras, more automation in warehouses, and more machinery applied to nightly scrubbing of workplaces. In June of 2020, for example, the MIT Computer Science and Artificial Intelligence Lab launched a fleet of warehouse disinfecting robots to reduce COVID risk at Boston area food banks (Gordon 2020). Throughout the world, firms and governments have deployed aerial drones to deliver medical supplies, monitor social distancing in crowds, and scan pedestrians for potential fever (Williams 2020). In the meatpacking industry, where the novel coronavirus has sickened thousands of workers, the COVID crisis will speed the adoption of robotic automation (Molteni 2020). Surely, there are myriad other examples that are not yet widely known but will ultimately prove important. … As the danger of infection recedes and millions of displaced workers seek reemployment … [f]irms will not, however, entirely unlearn the labor-saving methods that they have recently developed. We can expect leaner staffing in retail stores, restaurants, auto dealerships, and meat-packing facilities, among many other places.

Yet another shift is that the business sector is seeing a wave of bankruptcies, which may lead to some monthly rates as bad as the depths of the Great Recession. On the other side, business start-up rates were already on a downward trend in the US economy. With more firms going broke and fewer start-ups, the economic stage seems set for large firms to play a bigger role. Moreover, some months down the road, if and when the public health trends become clearer, a number of weakened firms will want to seek out partners for mergers as they try to reconstitute their strength. Nancy L. Rose discusses these issues in "Will Competition Be Another COVID-19 Casualty?" (July 2020). She writes:
In the tech sector, responses to COVID-19 produced strong positive demand shocks for many firms engaged with the digital economy, as work, school, shopping, entertainment, and other traditionally in-person interactions all moved online (Koeze and Popper 2020). Social media sites saw increases in usage, and online video and streaming services reported record growth in demand, likely reflecting a combination of new users and more-intensive engagement by preexisting users. This has tended to reinforce the preexisting advantages of the largest firms, which often had the systems, logistics, and capacity to better accommodate the surge in demand associated with the shift online. This impact is likely to reinforce their dominant position not only during COVID-19 shutdowns, but also extending into the future. As many households tried online grocery shopping for the first time, for example, their experiences may keep them as regular online grocery shoppers even when the economy reopens, exacerbating the shift from brick-and-mortar retail to online shopping, and to the largest online grocers, including Amazon’s subsidiary, Whole Foods. If this reinforces the network advantages of these large platforms, it may become even more difficult for competitors to gain a toehold. As competition diminishes, consumers, workers, and suppliers all stand to lose.
These economic shifts will not affect all groups equally. The in-person service industries that are some of the hardest-hit in this recession are also industries that hired relatively large numbers of low-skilled workers and women workers. Women who are parents are also experiencing a double whammy in the recession, because they are more likely to bear the larger share of child-care responsibilities at a time when schools and child-care facilities are shut down. Betsey Stevenson discusses this issue in "The Initial Impact of COVID-19 on Labor Market Outcomes Across Groups and the Potential for Permanent Scarring" (July 2020).

The pandemic has also hit women harder than men by the increased burden of care since children’s schools, daycare providers, and camps have closed, and many remain closed. Additionally, many families have had to consider how to best provide elder care and how to ensure the safety of those more vulnerable to the worst effects of COVID. Women’s traditional caregiving role and the crisis of care that many families are facing in the United States could have long-term repercussions for women’s labor force attachment and success, although we have yet to see this impact in the data. … 

While Congress has scrambled to save airlines on the belief that air travel is essential for a well-functioning modern economy, they have overlooked what is perhaps the most important industry in a modern economy: our child-care providers and schools. Parents will continue to struggle with child-care issues, particularly with the potential of children out of school and without child care this coming fall and the risk to grandparents of relying on them for child care. The pandemic has highlighted the fact that child care is not a women’s issue, it is not a personal issue, it is an economic issue; parents cannot fully return to work until they are able to ensure that their children can safely return to child-care and educational arrangements. The child-care crisis spurred by the pandemic could force families to make difficult decisions that will lead to lower labor force participation and lower earnings for decades to come. The solution to preventing large-scale permanent scarring, particularly among women, is to prioritize safely opening schools, to ensure that child-care centers do not go bankrupt and that the centers have the resources to adapt their buildings and practices to new protocols like improved air flow and increased surface disinfecting, and to encourage workplace flexibility.

My youngest child graduated from 12th grade earlier this year, so I no longer have a child in the K-12 system. But I'll point out in passing that several nonpartisan organizations and reports have argued that, for the good of the children and with appropriate precautions in place, K-12 schools should be reopened this fall. For example, here's a short statement from the American Academy of Pediatrics (June 25, 2020) and here's a more detailed report from the National Academy of Sciences on Reopening K-12 Schools During the COVID-19 Pandemic: Prioritizing Health, Equity, and Communities.

Equality of Opportunity for Young Children

Children are born into very different settings: families with higher or lower incomes; two parents or one; neighborhoods with more or less poverty and unemployment; and many other differences. On average, and of course with lots of individual exceptions, these circumstances of early childhood matter. For example, Ariel Kalil and Rebecca Ryan write in their essay in the Spring 2020 issue of Future of Children ("Parenting Practices and Socioeconomic Gaps in Childhood Outcomes"):

Socioeconomic status is correlated across generations. In the United States, 43 percent of adults who were raised in the poorest fifth of the income distribution now have incomes in the poorest fifth, and 70 percent have incomes in the poorest half. Likewise, among adults raised in the richest fifth of the income distribution, 40 percent have incomes in the richest fifth and 53 percent have incomes in the richest half. Many factors influence this intergenerational correlation, but evidence suggests that parenting practices play a crucial role. These include doing enriching activities with children, getting involved in their schoolwork, providing educational materials, and exhibiting warmth and patience. Parental behavior interpreted in this way probably accounts for around half of the variance in adult economic outcomes, and therefore contributes significantly to a country’s intergenerational mobility.

For those unfamiliar with the term, "socioeconomic status" or SES is a common term in the social sciences. It's often defined a little differently across various studies, but it commonly includes some mixture of data on income, education, and occupation. This issue of Future of Children looks at a number of factors that have a bigger effect on children from low-SES families. Here are some examples:

Melissa S. Kearney and Phillip B. Levine look at "Role Models, Mentors, and Media Influences." They point out that children born into families with different income levels grow up in very different neighborhoods: children in lower-income families, for example, grow up around more adults who are high school dropouts and fewer who are college graduates, higher unemployment, more single mothers, and more families receiving public assistance.

They also look at data from the Child Development Supplement to the Panel Study of Income Dynamics (PSID-CDS) and find that the ways in which children spend time are different in lower-SES households: specifically, less time in school and with family or other adults, and more time involved with media. They write:

The amount of time children spend in school has risen over the years, particularly among preschool children. Between 1981–82 and 2014, the length of time spent in preschool has almost doubled, jumping from a little over two hours per weekday to four hours. This is consistent with the rise of full-day preschool programs during this period. …  We also see a large shift toward children spending much more weekend time with media, though weekday media exposure hasn’t changed much. Weekend media exposure has jumped by 62 percent among children ages 12 to 17, and by roughly 40 percent among younger children. The data show a corresponding drop in time spent with family, other adults, and peers.

Young low-SES children spent considerably more time exposed to media and considerably less time in school, as compared to higher-SES children. In fact, low-SES children between the ages of two and five spend more than twice as much time exposed to media as do high-SES children: 2.6 hours per day versus 1.2 hours per day. They also spend much less time in school: 3.7 hours per day versus 5.2 hours. … Other researchers have found especially large summer time-use gaps across SES groups, most notably in children’s television viewing.

Other papers in the issue, like "Peer and Family Effects in Work and Program Participation" by Gordon B. Dahl and "Social Capital, Networks, and Economic Well-being," dig into the effects of growing up in neighborhoods with different social and peer networks.

In their essay, Ariel Kalil and Rebecca Ryan "provide an overview of what scholars know about the differences in parenting behavior by SES that contribute to differences in children's outcomes by SES." They survey evidence that "high-SES parents have consistently engaged in a wide range of enriching activities in and outside the home—such as reading to children and taking them to the library or a museum—far more often than their lower-SES counterparts." They mention the well-known study of how many words children hear in their early years:

A famous example of this difference comes from a study by Betty Hart and Todd Risley, who intensively observed the language patterns of 42 families with young children. They found that in professional families, children heard an average of 2,153 words per hour; in working-class families, the number was 1,251 words per hour; and in welfare-recipient families, it was only 616 words per hour. By age four, a child in a welfare-recipient family could have heard 32 million fewer words than a classmate in a professional family. More recent studies have clarified that the bulk of the difference in the number of words heard by children in higher- versus lower-SES families comes from words spoken directly to the children, not words said when children are present, and that the language used in higher-SES homes is more diverse and responsive to children’s speech than that in lower-SES homes. This SES-based difference in linguistic environments could plausibly contribute to SES-based gaps in children’s early language skills, especially given the robust evidence linking the quantity and quality of parents’ speech to young children to children’s early language development.
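The cumulative figure follows from simple arithmetic on the hourly rates. Here is a minimal sketch; the assumption of roughly 14 waking hours of exposure per day is mine, not stated in the excerpt.

```python
# Rough reconstruction of the Hart-Risley word-gap arithmetic.
professional_rate = 2153   # words heard per hour, professional families
welfare_rate = 616         # words heard per hour, welfare-recipient families
HOURS_PER_DAY = 14         # assumed exposure; not stated in the excerpt
YEARS = 4

gap = (professional_rate - welfare_rate) * HOURS_PER_DAY * 365 * YEARS
print(f"Cumulative gap by age four: {gap / 1e6:.1f} million words")
# ~31.4 million, roughly matching the 32 million cited above.
```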

More broadly, research has found differences in parenting styles: 

Mothers living in poverty display less sensitivity during interactions with their babies than do their higher-SES counterparts, and in descriptive analyses these differences explain gaps in children’s early language outcomes and behavior problems. … Authoritative parenting describes a broad style of interacting in which parents place high demands on children but also use high levels of warmth and responsiveness. Authoritarian parenting, by contrast, is characterized by strict limits on children and little warmth or dialogue, and punishment tends to be harsh. Studies have found that parents—both mothers and fathers—with more education are more likely to use an authoritative style than less-educated parents, who are likelier to use either an authoritarian style or a permissive style (characterized by “low demands coupled with high levels of warmth and responsiveness”), a pattern we see within racial and ethnic groups and in cross-country comparisons. Supporting these broad differences in style, studies have also shown that lower-income parents use more directives and prohibitions in speech with children than their middle-income counterparts do. Finally, in a large national sample, researchers saw a significant negative correlation between punitive behavior (such as yelling and hitting) and income.

One reason behind these differences is the financial resources available to families. A common pattern is that lower-income families spend a greater share of income on their children than higher-income families. But in absolute dollars, the gap between what high- and low-income families spend on children is rising. Moreover, the gap in time spent with children by high-income and low-income families is also rising: 

The best evidence on differences in money spent on children across the socioeconomic distribution comes from two studies by Emory University sociologist Sabino Kornrich, using data from the Consumer Expenditure Survey. (This survey, conducted by the Bureau of Labor Statistics, provides data on the expenditures, income, and demographic characteristics of US consumers.) Kornrich and his colleague Frank Furstenberg found not only that parents at the top of the income distribution spend more on children’s enrichment than lower-income parents do, but also that the difference in real dollars has increased substantially since the 1970s. This spending gap has grown despite the fact that parents at all income levels are devoting an increasing share of their income to children, and that the lowest-income parents spend the largest share. …

That said, high-SES parents (especially mothers) tend to work more hours than lower-SES parents and have less discretionary time—but still spend more time with their children. This stems from the fact that higher-SES parents (especially mothers) spend more of their childcare time primarily engaged in activities, while lower-SES mothers tend to spend childcare time being accessible to their children but largely engaged in housework or leisure activities. … [I]n a cross-national comparison study, highly educated mothers in many developed countries spent more time than less-educated mothers in primary child investment activities—even in Norway, where universal family policies are designed to equalize resources across parents.

The puzzle posed by Kalil and Ryan is that in survey data, families across different levels of income and education express similar beliefs about what characteristics their children will need to succeed. But on average, families with lower levels of education and income are not spending the same time or having the same success in providing these skills. 

Family structure surely plays a role here as well, and Melanie Wasserman contributes an essay on "The Disparate Effects of Family Structure." The big-picture patterns are probably familiar to many readers, but at least for me, they have not lost their ability to shock. From the 1960s through the 1980s, there was a sharp decline in the share of children living in two-parent families, a decline which has leveled out since about 1990. 

The pattern is less extreme for some groups and more extreme for others. Here's the data on black children, where the share growing up with two parents dropped sharply in the 1960s and 1970s and has remained at that lower level since then. 
As Wasserman points out, growing up in a family without a father, and in a neighborhood with few fathers, seems to have a particularly negative effect on boys. 

Research indicates that growing up outside a family with two biological, married parents yields especially negative consequences for boys, with effects evident in educational, behavioral, and employment outcomes. On the other hand, the effects of family structure don’t vary systematically for white and minority youth—with the exception of black boys, who appear to fare especially poorly in families and low-income neighborhoods without fathers present. … The evidence on the disparate effects of family structure for certain groups of children may help explain certain aggregate US trends. For instance, although boys and girls are raised in similar family environments, attend similar schools, and live in similar neighborhoods, boys are falling behind in key measures of educational attainment, including high school and college completion. The fact that boys’ outcomes are particularly malleable to the family in which they’re raised provides an explanation for this disparity. … And when we’re considering policy, it’s important to emphasize that the benefits of being raised by continuously married parents don’t stem from marital status alone. Instead, parents’ characteristics, their resources, and children’s characteristics all work together. In particular, when their biological fathers have limited financial, emotional, and educational resources, children’s cognitive and behavioral outcomes are no better when they’re raised by married parents than when they’re raised by non-married parents. Perhaps for this reason, policies intended to encourage marriage or marriage stability among fathers with limited resources are unlikely to generate lasting benefits for children.

The issue offers a number of other angles and perspectives as well. For example, I posted a few weeks ago about Daniel Hungerman's essay "Religious Institutions and Economic Wellbeing," which explores an institution that also provides support and networking. There are also a couple of articles about discrimination and bias, relating to the effects on parents and families, and to whether young people grow up believing that they have an opportunity to succeed: 

\”How Discrimination and Bias Shape Outcomes,\” by Kevin Lang and Ariella Kahn-Lang Spitzer, and \”The Double-Edged Consequences of Beliefs about Opportunity and Economic Mobility,\” by Mesmin Destin.
Many of the articles discuss possible policy interventions, and a number of the ideas are being pilot-tested in communities around the country. Here, I'll just say that a lot of the interventions are focused on families: for example, support groups or home visits for new parents, helping new parents set goals for the kind of parents they want to be and then following up on these goals, income support for new parents, job training and placement, and other steps. It seems important to continue experimenting with such programs and paying attention to the results. 
But I\’ll also add that it may be of equal or greater importance to focus on the communities in which children from low-SES households are growing up. What I have in mind here is support for public libraries, with after-school and evening hours; public parks with opportunities for recreation and hanging out; passes and concrete travel plans to go to museums, historical sites, state parks, cultural events;  pre-school and after-school programs; recreation centers; and just the basic provision of carving out ever-larger safe areas and times in neighborhoods. 

The Case for Income-Share Repayment of Student Loans

Student loans for higher education strike me as both sensible and crazy. The sensible part is that on average, a college degree raises income by more than the cost of college. The crazy part is that the United States is a country where someone between the ages of 18 and 20 is not legally allowed to order a beer, but is allowed to accumulate tens of thousands of dollars of debt. Moreover, basing an individual decision on average outcomes is a risky business: as a statistics professor of mine used to say, "On average, the Great Lakes don't freeze." Some students will complete college and have careers that easily allow them to repay college loans. At the other end of the spectrum, some students will take out loans and not complete college, or will complete college but end up on a lower-paid career path, so that repaying the student loans is very hard. 
One way to think about income-contingent college loans is that they limit the risk of being unable to repay because of poor career outcomes, but they also can impose higher costs on those with good career outcomes. 
The basic idea is that when you take out your college loan, you promise to repay, say, 10% of your monthly income for the next 30 years. You never need to pay more than 10%: indeed, there are often provisions that if your income is especially low, you don't need to repay at all during that period. In addition, any remaining balance after 30 years is wiped out. So you know that your future payments will be a limited share of your income, for a limited time. 
On the other side, if you have a strong income-earning career, paying 10% per month may cover what you borrowed within a few years. However, in this case the rule might be that you need to keep paying until you have repaid twice the principal of the original loan. The underlying idea is that in exchange for the flexibility to repay less if your career turns out badly, you promise to repay extra if your career turns out well. But even so, there is a cap on the total amount you need to repay. 
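To make the mechanics concrete, here is a minimal sketch in Python of how such a contract plays out. The 10% share, 30-year term, and two-times-principal cap come from the description above; the $20,000 income floor is purely an illustrative assumption, not the term of any actual program.

def simulate_icl(principal, annual_incomes, share=0.10,
                 income_floor=20_000, cap_multiple=2.0, max_years=30):
    """Total repaid and years paid on a hypothetical income-contingent loan."""
    cap = cap_multiple * principal          # never repay more than this
    total_paid = 0.0
    for year, income in enumerate(annual_incomes[:max_years], start=1):
        if income <= income_floor:          # below the floor: nothing due
            continue
        total_paid += min(share * income, cap - total_paid)
        if total_paid >= cap:               # cap reached: obligation ends
            return total_paid, year
    return total_paid, min(len(annual_incomes), max_years)  # rest forgiven

# A strong career hits the cap quickly; a weak one repays less than the
# principal and has the remainder forgiven at the end of the term.
print(simulate_icl(30_000, [80_000 + 2_000 * t for t in range(30)]))  # (60000.0, 7)
print(simulate_icl(30_000, [15_000] * 20 + [22_000] * 10))            # (22000.0, 30)

In this sketch the successful borrower hits the two-times-principal cap in year seven, while the low-income borrower repays $22,000 of a $30,000 principal and has the remainder forgiven at year 30.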
Proposals along these lines are not new. For example, back in 1993 the Journal of Economic Perspectives (where I work as Managing Editor) published an article on the topic by Alan B. Krueger and William G. Bowen ("Policy Watch: Income-Contingent College Loans," 7:3, pp. 193-201). As they point out, the idea can be traced back to a 1955 essay by Milton Friedman titled "The Role of Government in Education" (in Economics and the Public Interest, edited by Robert A. Solo). Friedman wrote 65 years ago: 

This underinvestment in human capital presumably reflects an imperfection in the capital market: investment in human beings cannot be financed on the same terms or with the same ease as investment in physical capital. It is easy to see why there would be such a difference. … A loan to finance the training of an individual who has no security to offer other than his future earnings is therefore a much less attractive proposition than a loan to finance, say, the erection of a building: the security is less, and the cost of subsequent collection of interest and principal is very much greater.

A further complication is introduced by the inappropriateness of fixed money loans to finance investment in training. Such an investment necessarily involves much risk. The average expected return may be high, but there is wide variation about the average. Death or physical incapacity is one obvious source of variation but is probably much less important than differences in ability, energy, and good fortune. The result is that if fixed money loans were made, and were secured only by expected future earnings, a considerable fraction would never be repaid. … The device adopted to meet the corresponding problem for other risky investments is equity investment plus limited liability on the part of shareholders. The counterpart for education would be to "buy" a share in an individual's earning prospects: to advance him the funds needed to finance his training on condition that he agree to pay the lender a specified fraction of his future earnings. In this way, a lender would get back more than his initial investment from relatively successful individuals, which would compensate for the failure to recoup his original investment from the unsuccessful. …

One reason why such contracts have not become common, despite their potential profitability to both lenders and borrowers, is presumably the high costs of administering them, given the freedom of individuals to move from one place to another, the need for getting accurate income statements, and the long period over which the contracts would run. These costs would presumably be particularly high for investment on a small scale with a resultant wide geographical spread of the individuals financed in this way. Such costs may well be the primary reason why this type of investment has never developed under private auspices. But I have never been able to persuade myself that a major role has not also been played by the cumulative effect of such factors as the novelty of the idea, the reluctance to think of investment in human beings as strictly comparable to investment in physical assets, the resultant likelihood of irrational public condemnation of such contracts, even if voluntarily entered into, and legal and conventional limitation on the kind of investments that may be made by the financial intermediaries that would be best suited to engage in such investments, namely, life insurance companies. The potential gains, particularly to early entrants, are so great that it would be worth incurring extremely heavy administrative costs. …

Individuals should bear the costs of investment in themselves and receive the rewards, and they should not be prevented by market imperfections from making the investment when they are willing to bear the costs. One way to do this is to have government engage in equity investment in human beings of the kind described above. A governmental body could offer to finance or help finance the training of any individual who could meet minimum quality standards by making available not more than a limited sum per year for not more than a specified number of years, provided it was spent on securing training at a recognized institution. The individual would agree in return to pay to the government in each future year x per cent of his earnings in excess of y dollars for each $1,000 that he gets in this way. This payment could easily be combined with payment of income tax and so involve a minimum of additional administrative expense. The base sum, $y, should be set equal to estimated average–or perhaps modal–earnings without the specialized training; the fraction of earnings paid, x, should be calculated so as to make the whole project self-financing. In this way the individuals who received the training would in effect bear the whole cost.
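In symbols, Friedman's scheme sets the annual payment at

$\text{payment} = \frac{F}{1000} \cdot \frac{x}{100} \cdot \max(E - y,\ 0),$

where $F$ is the total financing received, $E$ is the individual's earnings in that year, $y$ is the estimated average earnings without the training, and $x$ is chosen so that the program is self-financing.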

But there are various signs that the idea of income-contingent loans is gaining some momentum. For example, back in 1998 the United Kingdom underwent a seismic shift in higher education. It moved away from a model in which tuition was paid by the government and free to students, and toward a model in which universities would charge tuition. But at the same time, it set up a program of income-contingent loans. Here's a quick overview from a report by Jason D. Delisle and Preston Cooper ("International Higher Education Rankings: Why No Country's Higher Education System Can Be the Best," American Enterprise Institute, August 2019), which I posted about last summer. They write: 

In England, where the vast majority of the country’s population is concentrated, universities charge undergraduate students tuition of up to $11,856, making English universities some of the most expensive in the world. … To enable students to afford these high fees, the government offers student loans that fully cover tuition. Ninety-five percent of eligible students borrow. Repayment is income contingent; new students pay back 9 percent of their income above a threshold for up to 30 years, after which remaining balances are forgiven. Despite the lengthy term, the program is heavily subsidized: The government estimates that just 45 percent of borrowers who take out loans after 2016 will repay them in full … England’s high-resource, high-tuition model is relatively new. Until 1998, English universities were tuition-free, with the government directly appropriating the vast majority of higher education funding.

Another sign of the attractiveness of income-contingent loans is that some institutions–Purdue University is a leading example–have started providing such loans. Tim Sablik tells the story in "Education without Loans: Some schools are offering to buy a share of students' future income in exchange for funding their education" (Econ Focus, Federal Reserve Bank of Richmond, First Quarter 2020). Rather than referring to this arrangement as an "income-contingent loan," Sablik's article refers to it as an "income share agreement" or ISA: 

ISAs provide students with funding to cover their education expenses in exchange for a portion of their income once they start working. Under a typical contract, recipients pledge to pay a fixed percentage of their incomes for a set period of time up to an agreed cap. For example, a student who has $10,000 of his or her tuition covered through an ISA might agree to repay 5 percent of his or her monthly income for the next 120 months (10 years), up to a maximum of $20,000. ISAs typically also have a minimum income threshold before payments kick in; if the recipient earns less than the minimum, he or she pays nothing. This means that ISAs offer students more downside protection than a traditional loan.

A company called Vemo "reports that it works with more than 75 schools and training programs to offer ISAs." Many schools limit such programs to those who have already made progress toward completing certain courses of study, and just need a financial boost to make it over the finish line. 

The United States is seeing a shift toward income-contingent loans as well. Here's a comment from a Congressional Budget Office report on "Income-Driven Repayment Plans for Student Loans: Budgetary Costs and Policy Options" (February 2020), which I discussed in a post earlier this year:

Between 1965 and 2010, most federal student loans were issued by private lending institutions and guaranteed by the government, and most student loan borrowers made fixed monthly payments over a set period—typically 10 years. Since 2010, however, all federal student loans have been issued directly by the federal government, and borrowers have begun repaying a large and growing fraction of those loans through income-driven repayment plans.

Under the most popular income-driven plans, borrowers’ payments are 10 or 15 percent of their discretionary income, which is typically defined as income above 150 percent of the federal poverty guideline. Furthermore, most plans cap monthly payments at the amount a borrower would have paid under a 10-year fixed-payment plan. … Borrowers who have not paid off their loans by the end of the repayment period—typically 20 or 25 years—have the outstanding balance forgiven. (Qualifying borrowers may receive forgiveness in as little as 10 years under the Public Service Loan Forgiveness, or PSLF, program.)
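As a rough illustration of how these rules translate into a monthly bill, here is a short Python sketch. The 10 percent share and the 150-percent-of-poverty definition of discretionary income come from the CBO passage above; the 5 percent interest rate is an assumption, and the $12,760 poverty guideline is the 2020 figure for a one-person household, used here only for illustration.

def idr_monthly_payment(agi, balance, annual_rate=0.05,
                        poverty_guideline=12_760, share=0.10):
    """Monthly payment under a stylized income-driven repayment plan."""
    # Discretionary income: income above 150% of the poverty guideline.
    discretionary = max(agi - 1.5 * poverty_guideline, 0)
    income_driven = share * discretionary / 12
    # Cap: the payment on a standard 10-year fixed amortizing loan.
    r, n = annual_rate / 12, 120
    ten_year_fixed = balance * r / (1 - (1 + r) ** -n)
    return min(income_driven, ten_year_fixed)

# A borrower earning $35,000 with a $30,000 balance pays about $132 a
# month, well under the roughly $318 ten-year fixed payment.
print(round(idr_monthly_payment(agi=35_000, balance=30_000), 2))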

For me, the idea of income-contingent loans is a useful way of striking a balance between financial access to higher education and protecting students from being stuck with large and lifelong debts. (There are even legal provisions for garnishing Social Security benefits to repay student loans. The idea that this step is either necessary or possible seems demented.)  I also think that for many undergraduate students, telling them that they are promising to pay a certain percentage of income over the next 2-3 decades would put such loans in a more honest and open context. 

But as usual, I\’m all about acknowledging the tradeoffs, which arise for both economic and political reasons. Let\’s make the plausible assumption that students have some ability to know in advance whether they are going to be able to repay a conventional student loan within a few years. Those who are more likely to repay will take out conventional student loans rather than income-contingent loans–so they can avoid the risk of repaying more than they borrowed. Those who are less likely to repay will take out income-contingent loans, so they have greater protection against difficulties in meeting a conventional repayment plan.  This adverse selection dynamic suggests the underpayers are likely to outnumber the overpayers. 
In addition, politicians aren't great at setting up loan repayment plans. When setting the share of income to be repaid, or the length of the repayment period, political forces will tend to choose numbers that are unrealistically low in an actuarial sense. Politicians are also continually tempted to come up with lists of exceptions where repayment doesn't need to be made: for certain careers, in certain locations, during certain economic conditions, and so on. 
Putting these forces together, it seems likely that a substantial share of income-contingent loans will not be repaid in full. A rough estimate, based on the UK experience and on the US experience since 2010, is that about half of income-contingent loans, at least under current rules, will not repay the full principal and interest. I'm OK with some government subsidy of higher education, at both the state and federal level. But an honest plan for income-contingent loans may need to have higher repayment rates, for longer periods, than politicians and borrowers would prefer. 

Interview with Melissa Dell: Persistence Across History

Tyler Cowen interviews Melissa Dell, the most recent winner of the Clark Medal (which "is awarded annually ... to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge"). Both audio and a transcript of the one-hour conversation are available. From the overview: 

Melissa joined Tyler to discuss what’s behind Vietnam’s economic performance, why persistence isn’t predictive, the benefits and drawbacks of state capacity, the differing economic legacies of forced labor in Indonesia and Peru, whether people like her should still be called a Rhodes scholar, if SATs are useful, the joys of long-distance running, why higher temps are bad for economic growth, how her grandmother cultivated her curiosity, her next project looking to unlock huge historical datasets, and more.

Here, I\’ll just mention a couple of broad points that caught my eye. Dell specializes in looking at how conditions in at one point in time–say, being in an area which for a time has strong centralized tax-collecting government–can have persistent effects on economic outcomes decades or even centuries later. For those skeptical of such effects, Dell argues that explaining, say, 10% of a big difference between two areas is a meaningful feat for social science. She says: 

I was presenting some work that I’d done on Mexico to a group of historians. And I think that historians have a very different approach than economists. They tend to focus in on a very narrow context. They might look at a specific village, and they want to explain a hundred percent of what was going on in that village in that time period. Whereas in this paper, I was looking at the impacts of the Mexican Revolution, a major historical conflict, on economic development. And this historian, who had studied it extensively and knows a ton, was saying, “Well, I kind of see what you’re saying, and that holds in this case, but what about this exception? And what about that exception?”

And my response was to say my partial R-squared, which is the percent of the variation that this regression explains, is 0.1, which means it’s explaining 10 percent of the variation in the data. And I think, you know, that’s pretty good because the world’s a complex place, so something that explains 10 percent of the variation is potentially a pretty big deal.

But that means there’s still 90 percent of the variation that’s explained by other things. And obviously, if you go down to the individual level, there’s even more variation there in the data to explain. So I think that in these cases where we see even 10 percent of the variation being explained by a historical variable, that’s actually really strong persistence. But there’s a huge scope for so many things to matter.

I’ll say the same thing when I teach an undergrad class about economic growth in history. We talk about the various explanations you can have: geography, different types of institutions, cultural factors. Well, there’s places in sub-Saharan Africa that are 40 times poorer than the US. When you have that kind of income differential, there’s just a massive amount of variation to explain.

Nathan Nunn’s work on slavery and the role that that plays in explaining Africa’s long-run underdevelopment — he gets pretty large coefficients, but they still leave a massive amount of difference to be explained by other things as well, because there’s such large income differences between poor places in the world and rich places. I think if persistence explains 10 percent of it, that’s a case where we see really strong persistence, and of course, there’s other cases where we don’t see much. So there’s plenty of room for everybody’s preferred theory of economic development to be important just because the differences are so huge.
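For reference, the partial R-squared that Dell mentions can be written as

$R^2_{\text{partial}} = \frac{SSR_{\text{reduced}} - SSR_{\text{full}}}{SSR_{\text{reduced}}},$

where $SSR_{\text{full}}$ is the sum of squared residuals from the regression including the historical variable and $SSR_{\text{reduced}}$ is the sum from the regression without it: that is, it measures the share of the variation left over by the other controls that the historical variable accounts for.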

Dell also discusses a project to organize historical data, like old newspapers, in ways that will make them available for empirical analysis. She says: 

I have a couple of broad projects which are, in substance, both about unlocking data on a massive scale to answer questions that we haven’t been able to look at before. If you take historical data, whether it be tables or compendia of biographies or newspapers, and you go and you put those into Amazon Textract or Google Cloud Vision, it will output complete garbage. It’s been very specifically geared towards specific things which are like single-column books and just does not do well with digitizing historical data on a large scale. So we’ve been really investing in methods in computer vision as well as in natural language processing to process the output so that we can take data, historical data, on a large scale. These datasets would be too large to ever digitize by hand. And we can get them into a format that can be used to analyze and answer lots of questions.

One example is historical newspapers. We have about 25 million page scans of front pages and editorial pages from newspapers across thousands and thousands of US communities. Newspapers tend to have a complex structure. They might have seven columns, and then there’s headlines, and there’s pictures, and there’s advertisements and captions. If you just put those into Google Cloud Vision, again, it will read it like a single-column book and give you total garbage. That means that the entire large literature using historical newspapers, unless it uses something like the New York Times or the Wall Street Journal that has been carefully digitized by a person sitting there and manually drawing boxes around the content, all you have are keywords.

You can see what words appear on the page, but you can’t put those words together into sentences or into paragraphs. And that means we can’t extract the sentiment. We don’t understand how people are talking about things in these communities. We see what they’re talking about, what words they use, but not how they’re talking about it.

So, by devising methods to automatically extract that data, it gives us a potential to do sentiment analysis, to understand, across different communities in the US, how people are talking about very specific events, whether it be about the Vietnam War, whether it be about the rise of scientific medicine, conspiracy theories — name anything you want, like how are people in local newspapers talking about this? Are they talking about it at all?

We can process the images. What sort of iconic images are appearing? Are they appearing? So I think it can unlock a ton of information about news.

We’re also applying these techniques to lots of firm-level and individual-level data from Japan, historically, to understand more about their economic development. We have annual data on like 40,000 Japanese firms and lots of their economic output. This is tables, very different than newspapers, but it’s a similar problem of extracting structure from data, working on methods to get all of that out, to look at a variety of questions about long-run development in Japan and how they were able to be so successful.
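To make the layout problem concrete, here is a toy Python sketch of the general approach: segment the page first, then OCR each region in reading order. It assumes fixed, known column boundaries, which real newspapers do not have; Dell's group instead trains computer-vision models to detect the regions, and pytesseract and Pillow are used here only as stand-ins, not as her actual pipeline.

from PIL import Image
import pytesseract

def ocr_newspaper_page(path, n_columns=7):
    """Crop a page into equal-width columns and OCR each in reading order."""
    page = Image.open(path)
    width, height = page.size
    col_width = width // n_columns
    texts = []
    for i in range(n_columns):
        box = (i * col_width, 0, (i + 1) * col_width, height)
        texts.append(pytesseract.image_to_string(page.crop(box)))
    # Reading column by column keeps sentences and paragraphs intact,
    # which is what makes downstream sentiment analysis possible;
    # single-pass OCR across the full page width scrambles them.
    return "\n".join(texts)

Real pages also require classifying the detected regions (headlines, captions, advertisements) before the text can be assembled into articles, which is where the investment in computer vision and natural language processing that Dell describes comes in.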