The World Trade Statistical Review 2020 from the World Trade Organization is an annual report focused mainly on detailed data about trade patterns from the previous year. I find that it requires some mental effort to remember what the world economy looked like in 2019, but of course, trade tensions were already high. The value of global merchandise trade fell 3% in 2019, the first decline since the Great Recession years of 2008-09, while the value of services trade rose 2.1%. However, this year's report also includes some preliminary data on how aspects of global trade have evolved since the COVID-19 pandemic hit earlier this year. For example:
Slavery and the History of US Economic Growth
Slavery was both a set of economic arrangements and also a raw authoritarian human rights violation. It's unsurprising that there has been long-standing controversy over the relationship: for example, did slavery in the United States boost economic growth or hold it back? Gavin Wright revisits these issues in "Slavery and Anglo-American capitalism revisited" (Economic History Review, May 2020, 73:2, pp. 353-383, subscription required). The paper was also the subject of the Tawney lecture at the Economic History Society meetings in 2019, and the one-hour lecture can be freely viewed here.
The prominence of slave-based commerce for the Atlantic economy provides the background for the arresting connections reported by C. S. Wilder in his book Ebony and ivy, associating early American universities with slavery. The first five colleges in British America were major beneficiaries of the African slave trade and slavery. ‘Harvard became the first in a long line of North American schools to target wealthy planters as a source of enrollments and income’. The reason for what might seem an incongruous liaison is not hard to identify: ‘The American college was an extension of merchant wealth’. A wealthy merchant in colonial America was perforce engaged with the slave trade or slave-based commerce.
However, as numerous writers have pointed out over time, the coexistence of slavery with British and American capitalism of the 17th century does not prove that slavery was necessary or sufficient for an emerging capitalism. Consider the history of slavery across what we now call Latin America. At that time, Spain and Portugal (among others) were also active participants in the slave trade, yet their economies did not develop an industrial revolution like that of the UK. Countries all over Latin America were recipients of slaves, like the area that became the US, but those countries did not develop a US-style economy. Clearly, drawing a straight line from slavery to capitalism of the Anglo-American variety would be wildly simplistic.
The Atlantic economy of the eighteenth century was propelled by sugar, a quintessential slave crop. In contrast, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour, not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the nineteenth-century American South.
The best evidence that slavery was not essential for cotton supply is what happened after slavery’s demise. The wartime and postwar years of ‘cotton famine’ were times of great hardship for Lancashire, only partially mitigated by high-cost imports from India, Egypt, and Brazil. After the war, however, merchants and railroads flooded into the south-east, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Lancashire was back to its antebellum level …
First, \”[t]he region closed the African slave trade in 1807 and failed to recruit free labour,
making labour supply inelastic.\” Why were slaveowners against having more slaves? As Wright points out: \”After voting for secession in 1861 by 84 to 14, the Mississippi convention voted down a re-opening resolution by 66 to 13. The reason for this ostensible contradiction is not difficult to identify: to
re-open the African trade was to threaten the wealth of thousands of slaveholders across the South.\” In short, bringing in more slaves would have reduce the price of existing slaves–so existing slaveowners were against it. In addition, immigrant to the US from, say, 1820 to 1880 overwhelmingly went to free states. Slave states in the southwest \”displayed net white outmigration, even during cotton booms, at times when one might have expected a rush of immigration. One result was low population density and a level of cotton production well below potential.\”
Third, \”the fixed-cost character of slavery meant that even large plantations aimed at self-sufficiency in foodstuffs, limiting the overall degree of market specialization.\” One main advantage of slavery in cotton production was that it guaranteed having sufficient labor available at the two key times of the year for cotton: planting and harvesting. But during the rest of the year, most cotton plantations grew other crops and raised livestock
The shortcomings of the South as a cotton producer during this time were clear to some contemporary observers. Wright says: "Particularly notable are the views of Thomas Ellison, long-time chronicler and statistician of cotton markets, who observed in 1858: 'That the Southern regions of the United States are capable of producing a much larger quantity of Cotton than has yet been raised is very evident; in fact, their resources are, practically speaking, almost without limit'. What was it that restrained this potential supply? Ellison had no doubt that the culprit was slavery …"
A Gentle Case for Paying Kidney Donors
Today, 15 percent of Americans suffer from chronic kidney disease. Of these, roughly 800,000 have progressed to end-stage renal disease, where kidney function has been reduced to 10 to 15 percent of normal capacity. Most of them – half a million or so – require regular dialysis, and eventually a transplant, to survive. Dialysis sustains life, yet it is far from a perfect substitute for normal kidney function. It is a time-consuming process that often leaves patients fatigued, with increased risks of infection and sepsis, and subject to a number of other ailments. What’s more, dialysis is very expensive, with an average annual cost of $90,000 that is largely underwritten by government. In 2018 alone, Medicare spent $114 billion on chronic kidney disease patients, with the end-stage renal disease population, which makes up a meager 1 percent of the total Medicare population, accounting for more than $35 billion. And this figure does not include spending by private insurers or patients’ out-of-pocket payments.
Kidney transplantation is superior to dialysis in every way. It not only increases the quality of life for patients, but also substantially decreases long-term costs of care for patients with ESRD [end-stage renal disease]. All told, a kidney transplant is worth on the order of a half-million dollars to kidney disease sufferers and those who share the cost of dialysis. Transplants are also head and shoulders above dialysis in terms of life expectancy. While the five-year survival rate for end-stage renal disease is 35 percent, it increases to 97 percent for those receiving transplants.
One unsurprising result of the explosion in end-stage renal disease is that kidney procurement has consistently failed to provide enough organs for transplants. The waiting list for kidneys has ranged from 76,000 to 87,000 over the past decade, as more than 20,000 individuals are added to the rolls each year. And with demand increasing at around 10 percent annually, a lot of those in need are just out of luck. On average, 13 people die each day waiting for kidneys (and another seven die waiting for other organs). It is highly unlikely that more effective appeals to the kindness of others will solve the shortage long term. It certainly hasn’t so far.
Yes, there are hard questions about the incentives involved with paying for kidney donations, but the hard questions cut in both directions. Haeder writes:
The very idea of putting prices on body parts infuriates many by besmirching the ideal of altruistic donations. Of course, the altruism in the current transplantation process stops with the donor, the recipient and their families – everyone else is getting paid. Moreover, proponents rightfully point out that we already allow compensation to individuals for donations of blood plasma and for providing surrogate motherhood services, so the expansion to organs would be a change in degree only.
We have long been perfectly willing to exploit the poor by paying them to enroll in potentially dangerous prescription drug trials – and, most importantly, by encouraging them to put their lives on the line by joining the military.
For previous posts with relevance to the intersection of economics, incentives, and paying for kidney transplants, or paying for blood, plasma, bone marrow, and breast milk, see:
- \”Some Thoughts on Commodification\” (May 1, 2020)
- \”Paying Kidney Donors: Covering Expenses?\” (August 8, 2019)
- \”Global Kidney Exchange\” (March 7, 2017)
- \”Paying Bone Marrow Donors\” (November 23, 2016)
- \”Volunteers for Blood, Paying for Plasma\” (May 16, 2014)
- \”What if Government Paid Kidney Donors\” (December 2, 2015)
- \”Selling a Kidney: Would the Option Necessarily be Beneficial?\”(March 12, 2014)
- \”The Human Breast Milk Market\” (August 24, 2015)
Federal Reserve Assets Explode in Size (Again)
The size and composition of assets held by the Federal Reserve has evolved noticeably over the past decade. At the onset of the financial crisis [in 2008], the level of securities held outright declined as the Federal Reserve sold Treasury securities to accommodate the increase in credit extended through liquidity facilities. Though the various liquidity facilities wound down significantly over the course of 2009, the level of securities held outright expanded significantly between 2009 and late 2014 as the FOMC conducted a series of large-scale asset purchase programs to support the U.S. economy. Then, securities held outright declined as a result of the FOMC's balance sheet normalization program that took place between October 2017 and August 2019.
An Update Concerning the Economics of Lighthouses
Lighthouses have been a canonical example for economists–but what that example is intended to illustrate has shifted dramatically over time.
[I]t is a proper office of government to build and maintain lighthouses, establish buoys, &c. for the security of navigation: for since it is impossible that the ships at sea which are benefited by a lighthouse, should be made to pay a toll on the occasion of its use, no one would build lighthouses from motives of personal interest, unless indemnified and rewarded from a compulsory levy made by the state.
The role of the government was limited to the establishment and enforcement of property rights in the lighthouse. … [E]conomists wishing to point to a service which is best provided by the government should use an example which has a more solid backing.
The crucial technological change was in the illumination apparatus, with the introduction of mirrors in the 1780s and Fresnel lenses in the 1820s. This was not only a change in technical performance, as each development increased the brightness by more than an order of magnitude. It also brought about the sort of social and institutional transformations that historians of technology have identified as a technological system. As lighthouses became reliably visible at safe distances for sea-coast lighting for the first time, their purpose and function changed, as well as their costs and financing. The lighthouse system of the seventeenth century discussed by Coase was fundamentally different from that of John Stuart Mill and Paul Samuelson, with different expectations, expenses, and implications for excludability. While a market could support the lights that existed before 1780, which were primarily effective at close range, it could not support the transformed system that emerged in the wake of improved illumination. Nor could the market provide for the technological improvements, with no private owners of lighthouses investing in Fresnel lenses, one of the key improvements. Only after England introduced greater state intervention did the lights improve.
The private lighthouses of the 17th and 18th centuries mostly burned candles or coal, and could be seen for no more than about five miles. But in the late 18th century and into the 19th century, there was a wave of innovation involving oil lamps for illumination and lenses to focus the light out to sea. Levitt argues that the breakthrough innovation was the Fresnel lens in 1820. France had abolished private lighthouses in 1792, and instead used a government Lighthouse Commission. This government commission hired Augustin Fresnel to design and install this new light, which was essentially visible all the way to the horizon. Levitt describes the next step:
The French Lighthouse Commission placed Fresnel in charge of a massive overhaul of the French lighthouse system known as the Carte des phares. … Instead of focusing on port entrances alone, the plan was now to form a rational network which would illuminate the entire coast, so that whenever a ship went out of sight of one lighthouse, it would already be entering into sight of another. … Fresnel divided the lights into different orders based on their size, with the largest, first-order lights warning of a ship’s first approach to the coast, second-order lights aiding in the navigation of tricky passages, and the smaller “feux de port” marking port entrances. …
One of the key attributes of the system was that it would allow mariners to distinguish between the lights, and thus know their precise location at all times. Fresnel proposed three distinct light signatures: a rotating light of eight bulls-eye panels that flashed every minute, a rotating light of sixteen panels that flashed every thirty seconds, and a fixed light that gave out a continuous beam. The rotating lenses were mounted on columns that were turned first by an escapement mechanism, then by chariot wheels, and finally on a mercury bath (an idea conceived by Fresnel but not put into practice until the 1890s). The parabolic apparatus could also be rotated to produce a distinct flashing light. But Fresnel went one step further: carefully arranging the various lights so that no two similar ones were adjacent, and, with the visibility of lights overlapping, a sailor would be able to identify their location with precision. Jules Michelet commemorated the completion of the project in 1854 with the phrase, “For the sailor who steers by the stars, it was as if another heaven had descended to earth.”
Levitt makes a case that when John Stuart Mill was writing about lighthouses back in 1848, he was well aware of these changes. It turns out that one of Mill's childhood teachers was a British leader in efforts to upgrade Britain's lighthouses, which for a long time lagged behind those of France. The US soon adopted the French approach.
The United States had leapfrogged over Britain as well after adopting the French model of lighthouse provisioning. The colonies built ten lighthouses under British rule … The federal government appropriated them in 1789 in the first application of the Commerce Clause. Substantial investment in infrastructure only came in the 1850s, however, when a coalition of sailors, scientists, and engineers demanded the creation of a lighthouse board modeled on the French Lighthouse Committee. This board effected a massive, tax-funded program of new building and updated technology, and by the end of 1859 the United States had more than twice as many lighthouses as the British, virtually all of them equipped with Fresnel lenses.
Thus, the history of lighthouses suggests a lesson: for focused and local projects with static technology, a public-private partnership can work well. But for development of and investment in a new technology that will be applied to an interconnected national network, where it is difficult to charge end-users for value received, government may usefully play a larger role. Levitt writes:
This fact can also help us understand the lighthouse’s transformation into a public good. When lights simply marked harbors, one could charge every ship that entered the harbor. But when a light marked some empty, desolate stretch of coast, it was not so easy to charge whoever happened to pass by. Rather than being “plucked from the air,” Mill’s position accurately reflected the new situation. … A more detailed study of lighthouse administration supports the status of the lighthouse as a public good: the private market failed to either develop or invest in the technology necessary to establish the effective sea-coast lights now associated with the term “lighthouse.”
The Closing of the (Urban) Frontier
In a recent bulletin of the Superintendent of the Census for 1890 appear these significant words: “Up to and including 1880 the country had a frontier of settlement, but at present the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line. In the discussion of its extent, its westward movement, etc., it can not, therefore, any longer have a place in the census reports.” This brief official statement marks the closing of a great historic movement. Up to our own day American history has been in a large degree the history of the colonization of the Great West. The existence of an area of free land, its continuous recession, and the advance of American settlement westward, explain American development.
America\’s narrative of opportunity and mobility shifted at the tail end of the 19th century. In the 20th century, the narrative focused much less on a rural frontier, and instead was a story of moving to the city, where the opportunities to raise one\’s economic status and social status. But over the last 50 years or so, Edward L. Glaeser argues, we have in effect seen \”The Closing of America’s Urban Frontier\” (Cityscape: US Department of Housing and Urban Development, 2020, 22:2, pp. 5-21).
In a sense, America’s urban frontier became more open during Turner’s lifetime [1861-1932] because the traditional downsides of urban crowding, such as contagious disease, became less problematic. The urban frontier remained largely open during the dynamic 25 years that followed World War II. African-Americans migrated north by the millions to flee the Jim Crow South and take advantage of urban industrial jobs. Americans built new car-oriented cities in Sun Belt states like Arizona and Texas. The movement of people and firms diminished the vast income differences that once existed between locations.
Sometime around 1970, the urban frontier began to close. Community groups mobilized and opposed new housing and infrastructure. Highway revolts slowed urban expansion in car-oriented suburbs. Historic preservation made it more difficult to add new density in older cities. Suburbs crafted land-use restrictions that stopped new construction. While some productive Sun Belt cities still permitted significant amounts of new housing, even those one-time refuges of affordable urbanism had begun to be more restrictive. …
Migration has fallen dramatically over the past 20 years, and poorer migrants no longer move disproportionately to richer places. Housing costs have risen sharply in more productive places, which has generated a wealth shift from the young to the old. Income convergence across regions has stalled. … America’s growing geographic sclerosis makes it increasingly difficult for out-migration to solve the problems of local joblessness. …
The closing of America’s urban frontier seems to be a far more significant event in American economic history than Turner’s motivating fact that “the unsettled area has been so broken into by isolated bodies of settlement that there can hardly be said to be a frontier line.” Vast amounts of the American West were unpopulated in Turner’s time and remain so today. Cheap land could still be had for homesteading in 1893, and there remains plenty of inexpensive ranchland today for anyone who wants the rugged life of a mid-19th century frontiersman. By contrast, the high price of accessing America’s most productive urban areas today is an important fact of life for tens of millions of Americans.
Glaeser discusses these shifts in some detail, but ultimately, his focus is on housing costs. In the last few decades, metro areas with exceptionally high productivity and a large share of high-skilled workers have experienced a virtuous circle, where they become magnets for other high-skilled workers and still greater productivity. However, when this shift is combined with limits on housing and infrastructure, these urban areas become unaffordably expensive for those who are not already high-skill workers. The young up-and-coming go-getter, especially if the person is thinking about starting a family, would have to hesitate before moving to these areas unless there is already a high-paying job lined up.
Some Ways the Pandemic Could Alter the Shape of the US Economy
Recessions often result from an imbalance in the economy— for example, overinvestment in a sector, asset bubbles, or excessive leverage by businesses and households—and a rapid change in expectations about the future. … The COVID-19 recession was precipitated by necessary collective action taken to preserve the lives of Americans and to buy time to put responsive public health measures in place; a partial shutdown of the economy resulted from decisions by federal, state, and local governments as well as decisions by businesses and households. The nature of the shutdown led to a much sharper contraction than during prior recessions but also—so far—to a shorter period during which the economy was contracting. The unemployment rate began to fall just two months after it initially rose, and job gains in May were the fastest on record (BLS 2020). Retail sales bounced up in May after a sharp downturn in April (U.S. Census Bureau 2020a). Still, the quick onset of the recovery has not meant a full rebound, and the resurgence of the virus in June and July may signal more ups and downs for the economy. Even if improvements in the labor market and spending continue to be significant, the U.S. economy will likely face a sharply elevated unemployment rate and sizable gap in output relative to precrisis levels for well over a year (Congressional Budget Office 2020).
What are some likely effects of this particular kind of recession? David Autor and Elisabeth Reynolds point out several of them in "The Nature of Work after the COVID Crisis: Too Few Low-Wage Jobs" (July 2020). One example is what they call "telepresence," which is meant to describe a wider phenomenon than just telecommuting.
Placing people in any physically hostile environment—such as at the bottom of the sea, in Earth’s upper atmosphere, at a bomb disposal site—entails costly, energy-intensive, life-support systems that provide climate control (i.e., oxygen, temperature regulation, atmospheric pressure), water delivery, waste disposal, and so on. By obviating these needs, telepresence not only reduces costs but also typically creates better functionality: machines unencumbered by physically present operators can take on tasks that would be perilous with humans aboard. These same lessons apply to workplaces.
Though (most) work environments are not overtly hostile to human life, they are expensive, duplicative places for performing tasks that many employees could telepresently accomplish from elsewhere, albeit with a loss of the important social aspects of work that they facilitate. Not only is providing and maintaining physical offices costly for employers, but also the need to be physically present in offices imposes substantial indirect costs on the employee. The Census Bureau estimates that U.S. workers spend an average of 27 minutes commuting to work one way, which cumulates to 225 hours per year (U.S. Census Bureau 2019; authors’ calculations). Arguably, many of us who perform “knowledge work” have been so accustomed to the habit of “being there” that we failed to notice the rapid improvements in the next best alternative: not being there.
The past three decades have witnessed an urban renaissance. U.S. cities have seen steep reductions in crime, significant gains in racial and ethnic diversity, outsized increases in educational attainment, and a reversal of the tide of suburbanization that drew young, upwardly mobile families out of cities in earlier decades (Autor 2019; Autor and Fournier 2019; Berry and Glaeser 2005; Diamond 2016; Glaeser 2020). It seems plausible, though far from certain, that the postpandemic economy will see a partial reversal of these trends. If financiers, consultants, product designers, researchers, marketing executives, and corporate heads conclude that it is no longer necessary to commute daily to crowded downtown offices, and moreover, if business travelers find that they need to appear at these locations less frequently, this may spur a decline of the economic centrality, and even the cultural vitality, of cities.
Spurred by social distancing requirements and stay-at-home orders that generated a severe temporary labor shortage, firms have discovered new ways to harness emerging technologies to accomplish their core tasks with less human labor—fewer workers per store, fewer security guards and more cameras, more automation in warehouses, and more machinery applied to nightly scrubbing of workplaces. In June of 2020, for example, the MIT Computer Science and Artificial Intelligence Lab launched a fleet of warehouse disinfecting robots to reduce COVID risk at Boston area food banks (Gordon 2020). Throughout the world, firms and governments have deployed aerial drones to deliver medical supplies, monitor social distancing in crowds, and scan pedestrians for potential fever (Williams 2020). In the meatpacking industry, where the novel coronavirus has sickened thousands of workers, the COVID crisis will speed the adoption of robotic automation (Molteni 2020). Surely, there are myriad other examples that are not yet widely known but will ultimately prove important. … As the danger of infection recedes and millions of displaced workers seek reemployment … [f]irms will not, however, entirely unlearn the labor-saving methods that they have recently developed. We can expect leaner staffing in retail stores, restaurants, auto dealerships, and meat-packing facilities, among many other places.
In the tech sector, responses to COVID-19 produced strong positive demand shocks for many firms engaged with the digital economy, as work, school, shopping, entertainment, and other traditionally in-person interactions all moved online (Koeze and Popper 2020). Social media sites saw increases in usage, and online video and streaming services reported record growth in demand, likely reflecting a combination of new users and more-intensive engagement by preexisting users. This has tended to reinforce the preexisting advantages of the largest firms, which often had the systems, logistics, and capacity to better accommodate the surge in demand associated with the shift online. This impact is likely to reinforce their dominant position not only during COVID-19 shutdowns, but also extending into the future. As many households tried online grocery shopping for the first time, for example, their experiences may keep them as regular online grocery shoppers even when the economy reopens, exacerbating the shift from brick-and-mortar retail to online shopping, and to the largest online grocers, including Amazon’s subsidiary, Whole Foods. If this reinforces the network advantages of these large platforms, it may become even more difficult for competitors to gain a toehold. As competition diminishes, consumers, workers, and suppliers all stand to lose.
The pandemic has also hit women harder than men by the increased burden of care since children’s schools, daycare providers, and camps have closed, and many remain closed. Additionally, many families have had to consider how to best provide elder care and how to ensure the safety of those more vulnerable to the worst effects of COVID. Women’s traditional caregiving role and the crisis of care that many families are facing in the United States could have long-term repercussions for women’s labor force attachment and success, although we have yet to see this impact in the data. …
While Congress has scrambled to save airlines on the belief that air travel is essential for a well-functioning modern economy, they have overlooked what is perhaps the most important industry in a modern economy: our child-care providers and schools. Parents will continue to struggle with child-care issues, particularly with the potential of children out of school and without child care this coming fall and the risk to grandparents of relying on them for child care. The pandemic has highlighted the fact that child care is not a women’s issue, it is not a personal issue, it is an economic issue; parents cannot fully return to work until they are able to ensure that their children can safely return to child-care and educational arrangements. The child-care crisis spurred by the pandemic could force families to make difficult decisions that will lead to lower labor force participation and lower earnings for decades to come. The solution to preventing large-scale permanent scarring, particularly among women, is to prioritize safely opening schools, to ensure that child-care centers do not go bankrupt and that the centers have the resources to adapt their buildings and practices to new protocols like improved air flow and increased surface disinfecting, and to encourage workplace flexibility.
My youngest child graduated from 12th grade earlier this year, so I no longer have a child in the K-12 system. But I'll point out in passing that several nonpartisan organizations and reports have argued that for the good of the children, and with appropriate precautions in place, K-12 schools should be reopened this fall. For example, here's a short statement from the American Academy of Pediatrics (June 25, 2020) and here's a more detailed report from the National Academy of Sciences on Reopening K-12 Schools During the COVID-19 Pandemic: Prioritizing Health, Equity, and Communities.
Equality of Opportunity for Young Children
Socioeconomic status is correlated across generations. In the United States, 43 percent of adults who were raised in the poorest fifth of the income distribution now have incomes in the poorest fifth, and 70 percent have incomes in the poorest half. Likewise, among adults raised in the richest fifth of the income distribution, 40 percent have incomes in the richest fifth and 53 percent have incomes in the richest half. Many factors influence this intergenerational correlation, but evidence suggests that parenting practices play a crucial role. These include doing enriching activities with children, getting involved in their schoolwork, providing educational materials, and exhibiting warmth and patience. Parental behavior interpreted in this way probably accounts for around half of the variance in adult economic outcomes, and therefore contributes significantly to a country’s intergenerational mobility.
For those unfamiliar with the term, "socioeconomic status" or SES is a common term in the social sciences. It's often defined a little differently across various studies, but it commonly includes some mixture of data on income, education, and occupation. This issue of Future of Children looks at a number of factors that have a bigger effect on children from low-SES families. Here are some examples:
They also look at data from the Child Development Supplement to the Panel Study of Income Dynamics (PSID-CDS) and find that the ways in which children spend time differ in lower-SES households: specifically, less time in school and with family or other adults, and more time involved with media. They write:
The amount of time children spend in school has risen over the years, particularly among preschool children. Between 1981–82 and 2014, the length of time spent in preschool has almost doubled, jumping from a little over two hours per weekday to four hours. This is consistent with the rise of full-day preschool programs during this period. … We also see a large shift toward children spending much more weekend time with media, though weekday media exposure hasn’t changed much. Weekend media exposure has jumped by 62 percent among children ages 12 to 17, and by roughly 40 percent among younger children. The data show a corresponding drop in time spent with family, other adults, and peers.
Young low-SES children spent considerably more time exposed to media and considerably less time in school, as compared to higher-SES children. In fact, low-SES children between the ages of two and five spend more than twice as much time exposed to media as do high-SES children: 2.6 hours per day versus 1.2 hours per day. They also spend much less time in school: 3.7 hours per day versus 5.2 hours. … Other researchers have found especially large summer time-use gaps across SES groups, most notably in children’s television viewing.
Other papers in the issue, like "Peer and Family Effects in Work and Program Participation," by Gordon B. Dahl, and "Social Capital, Networks, and Economic Well-being," dig into the effects of growing up in neighborhoods with different social and peer networks.
A famous example of this difference comes from a study by Betty Hart and Todd Risley, who intensively observed the language patterns of 42 families with young children. They found that in professional families, children heard an average of 2,153 words per hour; in working-class families, the number was 1,251 words per hour; and in welfare-recipient families, it was only 616 words per hour. By age four, a child in a welfare-recipient family could have heard 32 million fewer words than a classmate in a professional family. More recent studies have clarified that the bulk of the difference in the number of words heard by children in higher- versus lower-SES families comes from words spoken directly to the children, not words said when children are present, and that the language used in higher-SES homes is more diverse and responsive to children’s speech than that in lower-SES homes. This SES-based difference in linguistic environments could plausibly contribute to SES-based gaps in children’s early language skills, especially given the robust evidence linking the quantity and quality of parents’ speech to young children to children’s early language development.
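A rough back-of-the-envelope check on that arithmetic, assuming roughly 14 waking hours per day from birth to age four (the waking-hours assumption is mine, not Hart and Risley's):

```python
# Hart and Risley's reported hourly word counts, by family type.
words_per_hour = {"professional": 2_153, "working_class": 1_251, "welfare": 616}

waking_hours = 14 * 365 * 4   # assumed waking hours from birth to age four
gap_per_hour = words_per_hour["professional"] - words_per_hour["welfare"]
print(f"cumulative gap: {gap_per_hour * waking_hours / 1e6:.0f} million words")
# -> about 31 million words, in the ballpark of the cited 32 million figure
```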
More broadly, research has found differences in parenting styles:
Mothers living in poverty display less sensitivity during interactions with their babies than do their higher-SES counterparts, and in descriptive analyses these differences explain gaps in children’s early language outcomes and behavior problems. … Authoritative parenting describes a broad style of interacting in which parents place high demands on children but also use high levels of warmth and responsiveness. Authoritarian parenting, by contrast, is characterized by strict limits on children and little warmth or dialogue, and punishment tends to be harsh. Studies have found that parents—both mothers and fathers—with more education are more likely to use an authoritative style than less-educated parents, who are likelier to use either an authoritarian style or a permissive style (characterized by “low demands coupled with high levels of warmth and responsiveness”), a pattern we see within racial and ethnic groups and in cross-country comparisons. Supporting these broad differences in style, studies have also shown that lower-income parents use more directives and prohibitions in speech with children than their middle-income counterparts do. Finally, in a large national sample, researchers saw a significant negative correlation between punitive behavior (such as yelling and hitting) and income.
The best evidence on differences in money spent on children across the socioeconomic distribution comes from two studies by Emory University sociologist Sabino Kornrich, using data from the Consumer Expenditure Survey. (This survey, conducted by the Bureau of Labor Statistics, provides data on the expenditures, income, and demographic characteristics of US consumers.) Kornrich and his colleague Frank Furstenberg found not only that parents at the top of the income distribution spend more on children’s enrichment than lower-income parents do, but also that the difference in real dollars has increased substantially since the 1970s. This spending gap has grown despite the fact that parents at all income levels are devoting an increasing share of their income to children, and that the lowest-income parents spend the largest share. …
That said, high-SES parents (especially mothers) tend to work more hours than lower-SES parents and have less discretionary time—but still spend more time with their children. This stems from the fact that higher-SES parents (especially mothers) spend more of their childcare time primarily engaged in activities, while lower-SES mothers tend to spend childcare time being accessible to their children but largely engaged in housework or leisure activities. … [I]n a cross-national comparison study, highly educated mothers in many developed countries spent more time than less-educated mothers in primary child investment activities—even in Norway, where universal family policies are designed to equalize resources across parents.
The puzzle posed by Kalil and Ryan is that in survey data, families across different levels of income and education express similar beliefs about what characteristics their children will need to succeed. But on average, families with lower levels of education and income are not spending the same time or having the same success in providing these skills.
Family structure surely plays a role here as well, and Melanie Wasserman contributes an essay on "The Disparate Effects of Family Structure." The big-picture patterns are probably familiar to many readers, but at least for me, they have not lost their ability to shock. From the 1960s through the 1980s, there was a sharp decline in the share of children living in two-parent families, a decline that has leveled out since about 1990.
Research indicates that growing up outside a family with two biological, married parents yields especially negative consequences for boys, with effects evident in educational, behavioral, and employment outcomes. On the other hand, the effects of family structure don’t vary systematically for white and minority youth—with the exception of black boys, who appear to fare especially poorly in families and low-income neighborhoods without fathers present. … The evidence on the disparate effects of family structure for certain groups of children may help explain certain aggregate US trends. For instance, although boys and girls are raised in similar family environments, attend similar schools, and live in similar neighborhoods, boys are falling behind in key measures of educational attainment, including high school and college completion. The fact that boys’ outcomes are particularly malleable to the family in which they’re raised provides an explanation for this disparity. … And when we’re considering policy, it’s important to emphasize that the benefits of being raised by continuously married parents don’t stem from marital status alone. Instead, parents’ characteristics, their resources, and children’s characteristics all work together. In particular, when their biological fathers have limited financial, emotional, and educational resources, children’s cognitive and behavioral outcomes are no better when they’re raised by married parents than when they’re raised by non-married parents. Perhaps for this reason, policies intended to encourage marriage or marriage stability among fathers with limited resources are unlikely to generate lasting benefits for children.
The issue offers a number of other angles and perspectives as well. For example, I posted a few weeks ago about Daniel Hungerman's essay "Religious Institutions and Economic Wellbeing," which explores an institution that also provides support and networking. There are a couple of articles about discrimination and bias, relating to the effects on parents and families, and also whether young people grow up believing that they have an opportunity to succeed:
The Case for Income-Share Repayment of Student Loans
This underinvestment in human capital presumably reflects an imperfection in the capital market: investment in human beings cannot be financed on the same terms or with the same ease as investment in physical capital. It is easy to see why there would be such a difference. … A loan to finance the training of an individual who has no security to offer other than his future earnings is therefore a much less attractive proposition than a loan to finance, say, the erection of a building: the security is less, and the cost of subsequent collection of interest and principal is very much greater.
A further complication is introduced by the inappropriateness of fixed money loans to finance investment in training. Such an investment necessarily involves much risk. The average expected return may be high, but there is wide variation about the average. Death or physical incapacity is one obvious source of variation but is probably much less important than differences in ability, energy, and good fortune. The result is that if fixed money loans were made, and were secured only by expected future earnings, a considerable fraction would never be repaid. … The device adopted to meet the corresponding problem for other risky investments is equity investment plus limited liability on the part of shareholders. The counterpart for education would be to "buy" a share in an individual's earning prospects: to advance him the funds needed to finance his training on condition that he agree to pay the lender a specified fraction of his future earnings. In this way, a lender would get back more than his initial investment from relatively successful individuals, which would compensate for the failure to recoup his original investment from the unsuccessful. …
One reason why such contracts have not become common, despite their potential profitability to both lenders and borrowers, is presumably the high costs of administering them, given the freedom of individuals to move from one place to another, the need for getting accurate income statements, and the long period over which the contracts would run. These costs would presumably be particularly high for investment on a small scale with a resultant wide geographical spread of the individuals financed in this way. Such costs may well be the primary reason why this type of investment has never developed under private auspices. But I have never been able to persuade myself that a major role has not also been played by the cumulative effect of such factors as the novelty of the idea, the reluctance to think of investment in human beings as strictly comparable to investment in physical assets, the resultant likelihood of irrational public condemnation of such contracts, even if voluntarily entered into, and legal and conventional limitation on the kind of investments that may be made by the financial intermediaries that would be best suited to engage in such investments, namely, life insurance companies. The potential gains, particularly to early entrants, are so great that it would be worth incurring extremely heavy administrative costs. …
Individuals should bear the costs of investment in themselves and receive the rewards, and they should not be prevented by market imperfections from making the investment when they are willing to bear the costs. One way to do this is to have government engage in equity investment in human beings of the kind described above. A governmental body could offer to finance or help finance the training of any individual who could meet minimum quality standards by making available not more than a limited sum per year for not more than a specified number of years, provided it was spent on securing training at a recognized institution. The individual would agree in return to pay to the government in each future year x per cent of his earnings in excess of y dollars for each $1,000 that he gets in this way. This payment could easily be combined with payment of income tax and so involve a minimum of additional administrative expense. The base sum, $y, should be set equal to estimated average, or perhaps modal, earnings without the specialized training; the fraction of earnings paid, x, should be calculated so as to make the whole project self-financing. In this way the individuals who received the training would in effect bear the whole cost.
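To make Friedman's x-and-y formula concrete, here is a minimal sketch. The particular values of x and y are hypothetical placeholders of mine, not figures from Friedman's essay:

```python
def friedman_repayment(earnings: float, advanced: float,
                       x_percent: float = 1.0, y_base: float = 30_000) -> float:
    """Yearly payment: x percent of earnings in excess of the base y,
    for each $1,000 advanced. x_percent and y_base are hypothetical."""
    excess = max(earnings - y_base, 0.0)
    units_of_1000 = advanced / 1_000.0
    return (x_percent / 100.0) * excess * units_of_1000

# With $8,000 advanced, earnings of $50,000, y = $30,000, and x = 1 percent:
# 0.01 * $20,000 * 8 = $1,600 owed for that year.
print(friedman_repayment(50_000, 8_000))  # 1600.0
```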
But there are various signs that the idea of income-contingent loans is gaining some momentum. For example, back in 1998 the United Kingdom underwent a seismic shift in higher education. It shifted away from a model where tuition was government-paid, and thus free to students, and toward a model where universities would charge tuition. But at the same time, it set up a program of income-contingent loans. Here's a quick overview from a report by Jason D. Delisle and Preston Cooper ("International Higher Education Rankings: Why No Country's Higher Education System Can Be the Best," American Enterprise Institute, August 2019), which I posted about last summer. They write:
In England, where the vast majority of the country’s population is concentrated, universities charge undergraduate students tuition of up to $11,856, making English universities some of the most expensive in the world. … To enable students to afford these high fees, the government offers student loans that fully cover tuition. Ninety-five percent of eligible students borrow. Repayment is income contingent; new students pay back 9 percent of their income above a threshold for up to 30 years, after which remaining balances are forgiven. Despite the lengthy term, the program is heavily subsidized: The government estimates that just 45 percent of borrowers who take out loans after 2016 will repay them in full … England’s high-resource, high-tuition model is relatively new. Until 1998, English universities were tuition-free, with the government directly appropriating the vast majority of higher education funding.
Another sign of the attractiveness of income-contingent loans is that some institutions (Purdue University is a leading example) have started providing such loans. Tim Sablik tells the story in "Education without Loans: Some schools are offering to buy a share of students' future income in exchange for funding their education" (Econ Focus, Federal Reserve Bank of Richmond, First Quarter 2020). Rather than referring to this arrangement as an "income-contingent loan," Sablik's article refers to it as an "income share agreement" or ISA:
ISAs provide students with funding to cover their education expenses in exchange for a portion of their income once they start working. Under a typical contract, recipients pledge to pay a fixed percentage of their incomes for a set period of time up to an agreed cap. For example, a student who has $10,000 of his or her tuition covered through an ISA might agree to repay 5 percent of his or her monthly income for the next 120 months (10 years), up to a maximum of $20,000. ISAs typically also have a minimum income threshold before payments kick in; if the recipient earns less than the minimum, he or she pays nothing. This means that ISAs offer students more downside protection than a traditional loan.
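Here is a minimal sketch of the repayment logic in that example. The 5 percent share and $20,000 cap come from the passage above; the minimum income threshold is a hypothetical value of my own:

```python
def isa_monthly_payment(monthly_income: float, total_paid_so_far: float = 0.0,
                        share: float = 0.05, min_monthly_income: float = 1_500,
                        cap: float = 20_000) -> float:
    """Stylized ISA payment: a fixed share of monthly income, waived below
    a minimum income threshold, and cut off once cumulative payments hit
    the cap. (The 120-month term would be tracked separately by the servicer.)"""
    if monthly_income < min_monthly_income:
        return 0.0
    payment = share * monthly_income
    return min(payment, max(cap - total_paid_so_far, 0.0))

# A recipient earning $4,000 a month pays 5 percent, i.e. $200 that month:
print(isa_monthly_payment(4_000))  # 200.0
```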
The United States is seeing a shift toward income-contingent loans as well. Here's a comment from a Congressional Budget Office report on "Income-Driven Repayment Plans for Student Loans: Budgetary Costs and Policy Options" (February 2020), which I discussed in a post earlier this year:
Between 1965 and 2010, most federal student loans were issued by private lending institutions and guaranteed by the government, and most student loan borrowers made fixed monthly payments over a set period—typically 10 years. Since 2010, however, all federal student loans have been issued directly by the federal government, and borrowers have begun repaying a large and growing fraction of those loans through income-driven repayment plans.
Under the most popular income-driven plans, borrowers’ payments are 10 or 15 percent of their discretionary income, which is typically defined as income above 150 percent of the federal poverty guideline. Furthermore, most plans cap monthly payments at the amount a borrower would have paid under a 10-year fixed-payment plan. … Borrowers who have not paid off their loans by the end of the repayment period—typically 20 or 25 years—have the outstanding balance forgiven. (Qualifying borrowers may receive forgiveness in as little as 10 years under the Public Service Loan Forgiveness, or PSLF, program.)
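As a stylized sketch of how those plan rules interact: the 10 percent share, the 150 percent discretionary-income definition, and the 10-year fixed-payment cap are from the CBO description; the poverty guideline and loan terms below are placeholder values of mine:

```python
def idr_monthly_payment(annual_income: float, loan_balance: float,
                        share: float = 0.10, poverty_guideline: float = 13_000,
                        annual_rate: float = 0.05, fixed_years: int = 10) -> float:
    """Income-driven payment: a share of discretionary income (income above
    150 percent of the poverty guideline), capped at the standard 10-year
    fixed payment on the same balance."""
    discretionary = max(annual_income - 1.5 * poverty_guideline, 0.0)
    income_based = share * discretionary / 12.0
    # Standard amortization formula for the 10-year fixed-payment cap.
    r = annual_rate / 12.0
    n = fixed_years * 12
    fixed_payment = loan_balance * r / (1.0 - (1.0 + r) ** -n)
    return min(income_based, fixed_payment)

# $40,000 income and a $30,000 balance: the income-based amount
# (about $171/month) is below the 10-year fixed cap (about $318/month).
print(round(idr_monthly_payment(40_000, 30_000), 2))  # 170.83
```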
For me, the idea of income-contingent loans is a useful way of striking a balance between financial access to higher education and protecting students from being stuck with large and lifelong debts. (There are even legal provisions for garnishing Social Security benefits to repay student loans. The idea that this step is either necessary or possible seems demented.) I also think that for many undergraduate students, telling them that they are promising to pay a certain percentage of income over the next 2-3 decades would put such loans in a more honest and open context.
Interview with Melissa Dell: Persistence Across History
Tyler Cowen interviews Melissa Dell, the most recent winner of the Clark medal (which "is awarded annually ... to that American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge"). Both audio and a transcript of the one-hour conversation are available. From the overview:
Melissa joined Tyler to discuss what’s behind Vietnam’s economic performance, why persistence isn’t predictive, the benefits and drawbacks of state capacity, the differing economic legacies of forced labor in Indonesia and Peru, whether people like her should still be called a Rhodes scholar, if SATs are useful, the joys of long-distance running, why higher temps are bad for economic growth, how her grandmother cultivated her curiosity, her next project looking to unlock huge historical datasets, and more.
Here, I\’ll just mention a couple of broad points that caught my eye. Dell specializes in looking at how conditions in at one point in time–say, being in an area which for a time has strong centralized tax-collecting government–can have persistent effects on economic outcomes decades or even centuries later. For those skeptical of such effects, Dell argues that explaining, say, 10% of a big difference between two areas is a meaningful feat for social science. She says:
I was presenting some work that I’d done on Mexico to a group of historians. And I think that historians have a very different approach than economists. They tend to focus in on a very narrow context. They might look at a specific village, and they want to explain a hundred percent of what was going on in that village in that time period. Whereas in this paper, I was looking at the impacts of the Mexican Revolution, a historical conflict, on economic development. And this historian, who had studied it extensively and knows a ton, was saying, “Well, I kind of see what you’re saying, and that holds in this case, but what about this exception? And what about that exception?”
And my response was to say my partial R-squared, which is the percent of the variation that this regression explains, is 0.1, which means it’s explaining 10 percent of the variation in the data. And I think, you know, that’s pretty good because the world’s a complex place, so something that explains 10 percent of the variation is potentially a pretty big deal.
But that means there’s still 90 percent of the variation that’s explained by other things. And obviously, if you go down to the individual level, there’s even more variation there in the data to explain. So I think that in these cases where we see even 10 percent of the variation being explained by a historical variable, that’s actually really strong persistence. But there’s a huge scope for so many things to matter.
I’ll say the same thing when I teach an undergrad class about economic growth in history. We talk about the various explanations you can have: geography, different types of institutions, cultural factors. Well, there’s places in sub-Saharan Africa that are 40 times poorer than the US. When you have that kind of income differential, there’s just a massive amount of variation to explain.
Nathan Nunn’s work on slavery and the role that that plays in explaining Africa’s long-run underdevelopment — he gets pretty large coefficients, but they still leave a massive amount of difference to be explained by other things as well, because there’s such large income differences between poor places in the world and rich places. I think if persistence explains 10 percent of it, that’s a case where we see really strong persistence, and of course, there’s other cases where we don’t see much. So there’s plenty of room for everybody’s preferred theory of economic development to be important just because the differences are so huge.
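For readers who want the mechanics: a partial R-squared compares the residual variation in a regression with and without the variable of interest. Here is a minimal illustration of my own on simulated data (not Dell's code or data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
controls = rng.normal(size=(n, 3))           # other explanatory variables
historical = rng.normal(size=n)              # the "persistence" variable
outcome = (0.7 * historical + controls @ np.array([1.0, -0.5, 0.3])
           + rng.normal(scale=2.0, size=n))  # plenty of unexplained noise

def sse(y, X):
    """Sum of squared residuals from an OLS fit of y on X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

sse_without = sse(outcome, controls)
sse_with = sse(outcome, np.column_stack([controls, historical]))
partial_r2 = (sse_without - sse_with) / sse_without
print(f"partial R-squared of the historical variable: {partial_r2:.2f}")
# By construction, this prints a value near 0.1: the variable accounts for
# about 10 percent of the variation left over after the controls.
```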
Dell also discusses a project to organize historical data, like old newspapers, in ways that will make them available for empirical analysis. She says:
I have a couple of broad projects which are, in substance, both about unlocking data on a massive scale to answer questions that we haven't been able to look at before. If you take historical data, whether it be tables or compendia of biographies or newspapers, and you go and you put those into Amazon Textract or Google Cloud Vision, it will output complete garbage. Those tools have been geared very specifically towards things like single-column books and just do not do well with digitizing historical data on a large scale. So we've been really investing in methods in computer vision as well as in natural language processing to process the output so that we can take data, historical data, on a large scale. These datasets would be too large to ever digitize by hand. And we can get them into a format that can be used to analyze and answer lots of questions.
One example is historical newspapers. We have about 25 million page scans of front pages and editorial pages from newspapers across thousands and thousands of US communities. Newspapers tend to have a complex structure. They might have seven columns, and then there's headlines, and there's pictures, and there's advertisements and captions. If you just put those into Google Cloud Vision, again, it will read it like a single-column book and give you total garbage. That means that for the entire large literature using historical newspapers, unless it uses something like the New York Times or the Wall Street Journal that has been carefully digitized by a person sitting there and manually drawing boxes around the content, all you have are keywords.
You can see what words appear on the page, but you can’t put those words together into sentences or into paragraphs. And that means we can’t extract the sentiment. We don’t understand how people are talking about things in these communities. We see what they’re talking about, what words they use, but not how they’re talking about it.
So, by devising methods to automatically extract that data, it gives us a potential to do sentiment analysis, to understand, across different communities in the US, how people are talking about very specific events, whether it be about the Vietnam War, whether it be about the rise of scientific medicine, conspiracy theories — name anything you want, like how are people in local newspapers talking about this? Are they talking about it at all?
We can process the images. What sort of iconic images are appearing? Are they appearing? So I think it can unlock a ton of information about news.
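To give a flavor of the problem Dell describes, here is a toy sketch of column-aware OCR: find the vertical whitespace gutters with a projection profile, then recognize each column separately so the text reads down columns rather than across the page. This is my own illustration using pytesseract, not the methods her team actually built:

```python
import numpy as np
from PIL import Image
import pytesseract  # requires the Tesseract OCR engine to be installed

def ocr_by_columns(path: str, min_gutter: int = 25) -> str:
    """Split a page image at wide vertical whitespace gutters, then OCR
    each column separately, so text reads down columns rather than
    straight across the page as a single-column reader would."""
    img = Image.open(path).convert("L")
    pixels = np.asarray(img)
    ink = (pixels < 128).sum(axis=0)   # dark pixels in each pixel-column
    blank = ink == 0                   # candidate gutter columns (clean scans)
    # Find runs of blank pixel-columns wide enough to count as gutters.
    cuts, run_start = [], None
    for x, is_blank in enumerate(blank):
        if is_blank and run_start is None:
            run_start = x
        elif not is_blank and run_start is not None:
            if x - run_start >= min_gutter:
                cuts.append((run_start + x) // 2)
            run_start = None
    bounds = [0] + cuts + [pixels.shape[1]]
    columns = []
    for left, right in zip(bounds, bounds[1:]):
        crop = img.crop((left, 0, right, img.height))
        columns.append(pytesseract.image_to_string(crop))
    return "\n".join(columns)

# print(ocr_by_columns("front_page.png"))
```

A real pipeline would also have to detect headlines, captions, and images that span columns, which is exactly why off-the-shelf single-column OCR falls apart on this material.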
We’re also applying these techniques to lots of firm-level and individual-level data from Japan, historically, to understand more about their economic development. We have annual data on like 40,000 Japanese firms and lots of their economic output. This is tables, very different than newspapers, but it’s a similar problem of extracting structure from data, working on methods to get all of that out, to look at a variety of questions about long-run development in Japan and how they were able to be so successful.