The Invention of Peaches and Pimentos

The peach, in the form that Americans know it, was invented in the state of Georgia by Samuel Rumph. Until then, something called a “peach” existed, but it was barely a commercial product, because it couldn’t be shipped. After years of tinkering and cross-breeding with fruit that no one much wanted, Rumph developed the Elberta peach (named after his wife) in 1875. Cynthia R. Greenlee tells the story in “Reinventing the Peach, the Pimento, and Regional Identity” (Issues in Science and Technology, Summer 2022). She writes:

Just how Rumph begat this new peach is uncertain. It was succulent and bright yellow with red markings. Its pit came out easily, and its fruit matured early in the season. That timing and its firmness were boons, and the trees yielded their large, handsome fruit prolifically. As historian Thomas Okie wrote in his rigorous and compelling study of how the peach became a Georgia icon, Rumph had produced the “industrial peach,” a reliable producer that was reasonably good to eat, relatively resistant to pests and diseases, amenable to growing in different climes and soil, and easily transportable. 

As a pioneer of what would eventually become agribusiness, Rumph considered the whole peach, from grafting to delivery, and intervened at various stages in the supply chain. First, he bred the peach that took the world by storm. Then, as a member of the Georgia State Horticultural Society’s committee on packing and shipping peaches, Rumph devoted himself to studying how to send peaches around the country. Although the first shipment of peaches to New York had happened around the time of Rumph’s birth in the 1850s, shipping still bedeviled the peach grower. Picked too green, they lost flavor when refrigerated. Too ripe, and they rotted almost immediately after emerging from cold shipment.

It wasn’t long before Rumph reported making a successful shipment of peaches to New York, offering proof of concept that Georgia peaches could ride the railways well and sell high, even though it was an arduous journey for the fruit: usually three days total of trains and transfer to steamers. In an effort to make shipping a precise science rather than a gamble, Rumph created a slatted crate that could be stacked and wheeled, founding the Elberta Crate Company. His unpatented invention spawned industrywide imitation, and he went on to invent a refrigerated railway car—also unpatented—that was widely used by fruit growers thereafter.  Rumph’s industry-changing shipping inventions established a durable and productive connection between fruit growers, the state, and industry. Railroads were booming across the South, buoyed by ample northern investment. And peach growers’ earnings—and nurserymen’s active involvement in politics—determined where railroads would go and stop. 

As Greenlee points out, the arrival of the peach altered the economic position and public view of Georgia. It wasn’t long before the same infrastructure of agricultural development and processing started by the peach led to crops of melons, berries and other fruits and vegetables. Georgia boasted that it was a place where both apples and oranges could grow–a pointed jab at northern orchards. The crops and best-practice growing technologies were spread by agricultural extension services.

Greenlee tells the story of another Georgia agricultural family, Samuel Riegel and his sons, who were to the pimento what Rumph was to the peach. In 1905, the family became obsessed with growing peppers. They got their US Congressman to obtain seeds from Europe, which they bred and cross-bred, and by 1911 they were distributing pimento seeds. Again, a chain of complementary innovations was important, and here one of Riegel’s sons took the lead:

Still, a particularly vexing wrinkle marred Perfection’s flawlessness, one common to all pimentos: thick, hard-to-process skin. It had to be softened with lye or burned off in a fire, then peeled by hand.  The other Riegel son, Mark, who worked briefly for the experiment station, thought there had to be a better way. He invented a roasting machine that ferried peppers on a continuous chain through a line of fire, turning the skin burnoff into a quicker mass process. Like Rumph before him, he attended to the supply chain end to end. Not only did Mark Riegel invent a process, he established a string of canneries that induced growers to cultivate peppers on contract. From the seed to the jar, the Riegels had cultivated the “perfect” pimento, enlisted farmers to grow their marvel, developed a better processing method, and created upbeat marketing campaigns with their Sunshine brand.

The overall lesson here is that a successful innovation is a multifaceted process. It often involves individuals willing to take the risks of experimentation and research, but those individuals often have a better chance if they are supported by an invisible infrastructure of science and shared information. The ultimate success depends not just on the product itself, but also on complementary innovations: processing, packaging, transportation. Success also depends on nurturing demand for the new product, often via marketing and publicity efforts. Finally, when these general ingredients are in place, one successful innovation can light the way to others.

Immigrants Assimilating, Then and Now

Noah Smith interviews Leah Boustan about her research on various aspects of immigration at his Noahpinion website (July 17, 2022). In answer to a question about “the biggest popular misconceptions about immigration in America today,” Boustan responds:

Americans vastly overestimate how many immigrants are in the country today. According to a survey conducted by Stefanie Stantcheva and her co-authors, Americans guess that 36% of the country is born abroad, whereas the real number is 14%. So, this misconception gives rise to fears that we are in an “immigration crisis” or that we have a “flood” of immigrants coming to our shores. In reality, the immigrant share of the population today (14%) only just reached the same level as it was during the Ellis Island period for over 50 years! After this, I would say that the second biggest misconception is that immigrants nowadays are faring more poorly in the economy and are less likely to become American than immigrants 100 years ago.

On the issue of how immigrants do at catching up economically:

We find that Mexican immigrants and their children achieve a substantial amount of integration, both economically and culturally. First, on the economic side, we compare the children of Mexican-born parents who were raised at the 25th percentile of the income distribution — that’s like two parents working full time, both earning in the minimum wage — to the children of US-born parents or parents from other countries of origin. The children of Mexican parents do pretty well! Even though they were raised at the 25th percentile in childhood, they reach the 50th percentile in adulthood on average. Compare that to the children of US-born parents raised at the same point, who only reach the 46th percentile. Of course, children of other immigrant backgrounds do even better, but the children from Mexican households are experiencing a lot of upward mobility. …

[T]he pattern … whereby the kids of poor and working-class immigrants do better than their American counterparts, is true both today and in the past. The children of poor Irish or Italian immigrant parents outperformed the children of poor US-born parents in the early 20th century; the same is true of the children of immigrants today. 

We are able to delve into the reasons for this immigrant advantage in the past in great detail, and we find that the single most important factor is geography. Immigrants tended to settle in dynamic cities that provided opportunities both for themselves and for their kids. So, in the past, this meant avoiding Southern states, which were primarily agricultural and cotton-growing at the time, and – outside of the South – moving to cities more than to rural areas. If you think about it, it makes sense: immigrants have already left home, often in pursuit of economic opportunity, so once they move to the US they are more willing to go where the opportunities are. 

Geography still matters a lot today, but not as much as in the past. Instead, we suspect that educational differences between groups matter today. Think about a Chinese or Indian immigrant who doesn’t earn very much, say working in a restaurant or a hotel or in childcare. In some cases, the immigrant him or herself arrived in the US with an education – even a college degree – but has a hard time finding work in their chosen profession. Despite the fact that these immigrant families do not have many financial resources, they can pass along educational advantages to their children.

On the issue of how immigrants assimilate culturally, Boustan comments:

We are economists, so the first work we did on immigration was focused on economic outcomes like earnings and occupations. But, voters often care more deeply about cultural issues – both in the past and today. So, we realized that we wanted to try to measure ‘fitting in’ or cultural assimilation using as many metrics as we could find. We looked at learning English, of course, but also who immigrants marry, whether immigrants live in an enclave neighborhood or a more integrated area, and – one of our favorite measures – the names that immigrant parents choose for their children. These are all measures that can be gathered for immigrants today and 100 years ago; there are other metrics for today that don’t exist for the past — like ‘do immigrants describe themselves as patriotic’ (answer is: they do).

What we learned is that immigrants take steps to ‘fit in’ just as much today as they did in the past. So, for example, we can look at the names that immigrant parents choose for their kids. Both in the past and today, immigrants choose less American-sounding names for their kids when they first arrive in the US, but they start to converge toward the names that US-born parents pick for their kids as they spend more time in the country. Immigrants never completely close this ‘naming gap’ but they move pretty far in that direction, both then and now. 

The No-Burn Forest Policy: Origins and Consequences

The western United States has experienced some extraordinarily large forest fires in recent years. Part of the reason is drought conditions that have left the landscape tinder-dry. But another part of the reason is a century-long legacy of shutting down forest fires–even controlled burns. The PERC Reports magazine considers the history and consequences in a symposium on “How to Confront the Wildfire Crisis” in the Summer 2022 issue.

For example, Brian Yablonski discusses the origins of the policy in “The Big Burn of 1910 and the Choking of America’s Forests.” He writes:

Record low precipitation in April and May [1910] coupled with severe lightning storms in June and sparks from passing trains had ignited many small fires in Montana and Idaho. More than 9,000 firefighters, including servicemembers from the U.S. Army, waged battle against the individual fires. The whole region seemed to be teetering on the edge of disaster. Then, on August 20, a dry cold front brought winds of 70 miles per hour to the region. The individual fires became one. Hundreds of thousands of acres were incinerated within hours. The fires created their own gusts of more than 80 miles per hour, producing power equivalent to that of an atomic bomb dropped every two minutes.

Heroic efforts by firefighters to save small mountain towns and evacuate their people became the stuff of legend. “The whole world seemed to us men back in those mountains to be aflame,” said firefighter Ed Pulaski, one of the mythical figures to emerge from the Big Burn. “Many thought it really was the end of the world.” Smoke from the Mountain West colored the skies of New England. In just two days, the Big Burn torched an unfathomable 3 million acres in western Montana and northern Idaho, mostly on federally owned forest land, and left 85 dead in its wake, 78 of them firefighters. The gigafire-times-three scarred not only the landscape, but also the psyche of the Forest Service, policymakers, and ordinary Americans.

After the Big Burn, forest policy was settled. There was no longer any doubt or discussion. Fire protection became the primary goal of the Forest Service. And with it came a nationwide policy of complete and absolute fire suppression. In the years to follow, the Forest Service would even formalize its “no fire” stance through the “10 a.m. rule,” requiring the nearly impossible task of putting out every single wildfire by 10 a.m. the day after it was discovered. The rule would stay in effect for most of the century.

Yablonski pointed me to an essay called “Fire and the Forest—Theory of Light Burning” by F.E. Olmsted, published in the January 1911 issue of the Sierra Club Bulletin (and available via the magic of HathiTrust). Olmsted is one of the seminal figures in American forestry, credited as one of the founders of the US National Forest System, and he also taught forestry at Harvard. In this essay, Olmsted is writing just after the Big Burn of 1910, and he is arguing against “light burning” in favor of the fullest possible suppression of forest fires. Olmsted is fine with burning an area after logging has occurred, to clean it up for new growth. But he argues that more extensive “light burning” will wipe out young trees, which is a waste of timber that could be cut in the future. He wrote:

Public discussion of the matter has brought to light, among other things, the fact that certain people still believe in the old theory of “burning over the woods” periodically in order to get rid of the litter on the ground, so that big fires which may come along later on will find no fuel to feed upon. This theory is usually accompanied by reference to the “old Indian fires” which the redman formerly set out quite methodically for purposes connected with the hunting of game. We are told that the present virgin stands of timber have lived on and flourished in spite of these Indian fires. Hence, it is said, we should follow the savage’s example of “burning up the woods” to a small extent in order that they may not be burnt up to a greater extent bye and bye. Forest fires, it is claimed, are bound to run over the mountains in spite of anything we can do. Besides, the statement is made that litter will gradually accumulate to such an extent that when a fire does start it will be impossible to control it and we shall lose all our timber. Why not choose our time in the fall or spring when the smaller refuse on the ground is dry enough to burn, the woods being damp enough to prevent any serious damage to the older trees, and burn the whole thing lightly? This theory of “light burning” is especially prevalent in California and has cropped out to a very noticeable extent since the recent destructive fires in Idaho and Montana.

The plan to use fire as a preventive of fire is absolutely good. Everything depends, however, upon how it is used. The Forest Service has used fire extensively ever since it assumed charge of the public timber lands in California. We are selling 200,000,000 feet of timber and on all the lands which we logged over we see to it that the slashings and litter upon the ground are piled up and burned. This must be accomplished, of course, in such a way that no damage results to the younger tree growth, such as seedlings, saplings, thickets and poles of the more valuable species. If we should burn without preparing the ground beforehand, most of the young trees would be killed. …

With the exception of two or three lumber companies the Forest Service is the only owner of timber in the State of California which has used and is using fire in a practical way for cleaning-up purposes. What “light burning” has been done on private lands in California, accompanied by preparation of the ground beforehand, shows that wherever the fire has actually burned, practically all young trees up to fifteen years of age have been killed absolutely, as well as a large part of those between the ages of fifteen and forty years. The operation, to be sure, has resulted in cleaning up the ground to a considerable extent and will afford fairly good protection to mature trees in case they are threatened by fire in the future. If a fire comes along it will naturally not have as much rubbish to feed upon and may not be so hot as to injure the larger tree growth. In other words, a safeguard has been provided for timber which may be turned into dollars in the immediate future. With this advantage has come the irreparable damage to young trees. It has amounted, in fact, to the almost total destruction of all wood growth up to the age of twenty years. This is not forestry; not conservation; it is simple destruction. That is the whole story in a nutshell.

The private owner of timber, whose chief concern is the protection of trees which can be turned into money immediately and who cares little or nothing about what happens to the younger stuff which is not yet marketable, may look upon the “light burning” plan as being both serviceable and highly practicable, provided the expense is reasonable. On the other hand, the Government, first of all, must keep its lands producing timber crops indefinitely, and it is wholly impossible to do this without protecting, encouraging, and bringing to maturity every bit of natural young growth. …

The accumulation of ground litter is not at all serious and the fears of future disastrous fires, as a result of this accumulation, are not well founded. Fires in the ground litter are easily controlled and put out. On the other hand, fires in brush or chaparral are very dangerous, destructive, and difficult to handle. Brush areas under and around standing timber are the worst things we have to contend with. Brush is not killed by fire; it sprouts and grows up again just as densely as before. The best way to kill brush is to shade it out by tree growth, but to do this we must let young trees grow. Fires and young trees cannot exist together. We must, therefore, attempt to keep fire out absolutely. Some day we will do this and just as effectively as the older countries have done it for the past 100 years. In the mean time we are keeping fires down in California by extinguishing them as soon as possible after they begin.

It is true that fires will always start; that we can never provide against. On the other hand, the supposition that they will always run is not well taken. If we can stop small fires at the start, fires will never run. With more men, more telephones, and more trails we shall be able to do this and at a cost of only a cent or two more an acre.

After about a century of following Olmsted’s prescription that “[w]e must, therefore, attempt to keep fire out absolutely,” and that this is possible, we are now confronted with waves of historically large wildfires. Notice that Olmsted’s reasoning for stopping all fires, which may sound like a form of conservationism, is actually based implicitly on the idea that all the forests will be regularly logged!

Blocking fires changes the character of the forest. One of the other essays in the issue refers to a 2022 study by some authors at the US Forest Service, “Operational resilience in western US frequent-fire forests,” published in Forest Ecology and Management. They look at the density of western US forests, and find that the policy of fire suppression has led to much greater density (which I suppose Olmsted would view as a successful story of trees waiting to be logged), but also generally smaller trees as forest growth becomes more competitive. They write:

With the increasing frequency and severity of altered disturbance regimes in dry, western U.S. forests, treatments promoting resilience have become a management objective but have been difficult to define or operationalize. Many reconstruction studies of these forests when they had active fire regimes have documented very low tree densities before the onset of fire suppression. Building on ecological theory and recent studies, we suggest that this historic forest structure promoted resilience by minimizing competition which in turn supported vigorous tree growth. To assess these historic conditions for management practices, we calculated a widely-used measure of competition, relative stand density index (SDI), for two extensive historical datasets and compared those to contemporary forest conditions. Between 1911 and 2011, tree densities on average increased by six to seven fold while average tree size was reduced by 50%. Relative SDI for historical forests was 23–28% of maximum, in the ranges considered ‘free of’ (<25%) to ‘low’ competition (25–34%). In contrast, most (82–95%) contemporary stands were in the range of ‘full competition’ (35–59%) or ‘imminent mortality’ (≥60%). Historical relative SDI values suggest that treatments for restoring forest resilience may need to be much more intensive then the current focus on fuels reduction. 
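For readers who want to see what the relative SDI metric in this abstract involves, here is a minimal sketch using Reineke's standard formula for the stand density index; the maximum-SDI value and the example stands are my own illustrative assumptions, not numbers from the study.

```python
# Minimal sketch of relative stand density index (SDI), using Reineke's formula:
# SDI = trees per acre * (quadratic mean diameter in inches / 10) ** 1.605.
# The assumed maximum SDI and the example stands are illustrative, not from the paper.

def stand_density_index(trees_per_acre: float, qmd_inches: float) -> float:
    """Reineke's SDI: density scaled to a reference 10-inch quadratic mean diameter."""
    return trees_per_acre * (qmd_inches / 10.0) ** 1.605

SDI_MAX = 450.0  # assumed species-specific maximum for a dry mixed-conifer forest type

def relative_sdi(trees_per_acre: float, qmd_inches: float) -> float:
    """SDI as a share of the assumed maximum, on a 0-1 scale."""
    return stand_density_index(trees_per_acre, qmd_inches) / SDI_MAX

# Hypothetical stands echoing the paper's qualitative contrast: sparse historical
# forests with large trees versus dense contemporary forests with smaller trees.
historical = relative_sdi(trees_per_acre=50, qmd_inches=14)
contemporary = relative_sdi(trees_per_acre=350, qmd_inches=8)
print(f"historical relative SDI:   {historical:.0%}")    # about 19%, 'free of competition' (<25%)
print(f"contemporary relative SDI: {contemporary:.0%}")  # about 54%, 'full competition' (35-59%)
```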

Their findings are consistent with observations on the ground. For example, Yablonski’s essay in PERC Reports mentions the views of the head of a Californian tribe, the North Fork Mono:

According to Ron Goode, tribal chairman of the North Fork Mono, prior to white settlement, Native Americans carried out “light burning” on 2 percent of the state annually. As a result, most forest types in California had about 64 trees per acre. Today, it is more common to see 300 trees per acre. This has led to a fiery harvest of destruction—bigger, longer, hotter wildfires.

Several implications follow. One is that modern forestry practices since the Big Burn of 1910 have substantially changed the character of western forests. I suppose one can argue over whether the change is on balance good or bad, but the fact of the change itself is established.

The combination of stopping burning for a century or so, along with the resulting heavy growth of trees, along with the decades-long accumulation of dead timber, along with recent years of drought conditions, has set the stage for major fires. Yablonski writes: “Fires that burn more than 100,000 acres are becoming commonplace in America. Nowhere is that more evident than in California. Throughout the 20th century, there were 45 megafires recorded in the state. In the first 20 years of this century, there have already been 35—seven in 2021 alone.”

The century-old policy of putting out every wildfire within a day of its discovery has clearly failed. The alternatives to reduce the potential fuel load for forest fires are much more widespread logging or controlled burning. Another essay in PERC Reports, by Tate Johnson, is called “Returning Fire to the Land.” Johnson describes the current situation in South Carolina:

One morning last March [2022], the South Carolina Forestry Commission website displayed the number of active fires in the state: 163. An interactive map showed each fire, represented by markers that ranged from red to orange to yellow to teal. In contrast to similar maps that are followed closely throughout the summer, particularly in the West, the markers didn’t represent wildfires. Indeed, South Carolina’s wildfire tracker showed zero active that day. Rather, these were “good” fires: prescribed burns that had been planned in advance, set deliberately, and aimed to achieve specific land management objectives, typically to control vegetation and reduce hazardous fuels.

“It’s gonna burn one day or another,” says Darryl Jones, forest protection chief of the South Carolina Forestry Commission, “so we should choose when we burn it and make sure we do it on the right days when it’s most beneficial.” He adds that the idea is to “burn an area purposely before it can burn accidentally.”

The different colors of map markers signified the purpose of each fire. Some burns aimed to improve wildlife habitat by stimulating seed production, clearing out a landscape’s lower layer of growth, or creating forest openings. Others were set to clear crop fields in preparation for planting or to burn debris piles that had been gathered and stacked. Still more were tagged “hazard reduction”: fires set to remove dangerous accumulations of pine needles, briars, shrubs, and other fuels that naturally build up in southern forests. Spring is the prime time to burn given its favorable conditions for wind, temperature, humidity, and fuel, although the burn window can extend earlier or later into the year.

Is a Recession Defined as “Two Negative Quarters”?

The US economy is likely to show negative growth of its gross domestic product for the first two quarters of 2022. In late June, the Bureau of Economic Analysis estimated, based on updated evidence, that GDP shrank at an annualized rate of 1.6% in the first quarter of 2022. The first preliminary estimate from BEA about growth for the second quarter of 2022 comes out on Thursday morning. But according to estimates from the Federal Reserve Bank of Atlanta, which has a “nowcasting” model that tries to estimate economic changes in real time, the announcement will likely be another decline of 1.6% in the second quarter of 2022.
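For readers who want to see how a quarterly change maps into the annualized rates that BEA reports, here is the basic arithmetic; the convention is to compound the quarter-over-quarter change over four quarters, and the numbers below are just an illustration of that conversion.

```python
# Converting between a quarter-over-quarter GDP change and the annualized rate
# reported by BEA: the quarterly change is compounded over four quarters.

def annualize(quarterly_growth: float) -> float:
    """Annualized rate implied by one quarter's growth rate (both as decimals)."""
    return (1.0 + quarterly_growth) ** 4 - 1.0

def de_annualize(annualized_rate: float) -> float:
    """Quarter-over-quarter change implied by an annualized rate."""
    return (1.0 + annualized_rate) ** 0.25 - 1.0

# An annualized decline of 1.6% corresponds to a quarterly fall of roughly 0.4%.
print(f"{de_annualize(-0.016):.2%} per quarter")  # about -0.40%
print(f"{annualize(-0.004):.2%} annualized")      # about -1.59%
```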

My question here is one of nomenclature and analysis: Does two quarters of declining GDP mean that the US economy is in a recession? After all, the unemployment rate in June 2022 was 3.6%, which historically would be viewed as a low level. The number of jobs in the US economy plummeted during the short pandemic recession from 152.5 million in February 2020 to 130.5 million in April 2020, but since then has been rising steadily and is back up to 152 million in June 2022. Similarly, the labor force participation rate of the US economy dropped from 63.4% in February 2020 to 60.2% in April 2020, but has rebounded since then and has been in the range of 62.2-62.4% in recent months. So can you have a “recession” that happens simultaneously with low unemployment rates and a rising number of jobs?

The definition of a “recession” is not a physical constant like the boiling point of water. It is quite common to define a recession as “two quarters of negative GDP growth.” But there is actually no US-government approved definition of “recession.” In the US context, the most commonly used recession dates are determined by a group of academic economists working under the auspices of the National Bureau of Economic Research. For example, here’s my post about the NBER announcement that February 2020 was the end of the previous economic upswing, and here’s my post about the NBER announcement that the pandemic recession was only two months long.

The White House Council of Economic Advisers has recently referred to the NBER Business Cycle Dating Committee as “the official recession scorekeeper,” but this is incorrect. Although US business cycle dates have been based on NBER research for a long time, going back to the Great Depression, an organized NBER Business Cycle Dating Committee wasn’t formed until 1978. It is not authorized by law. In fact, the lack of an official definition is probably a good thing, because it’s good to keep economic statistics out of the hands of politicians, and there are obvious political implications to pronouncing on the dates when a recession has started, is ongoing, or has stopped. As one recent example, if a “recession” was strictly defined as a decline of two quarters in GDP growth, then the Trump administration would have been justified in saying that the US economy did not have a pandemic “recession” at all.

However, there are a number of situations where a recession is in fact defined as two negative quarters of GDP growth. For example, here’s a Eurostat publication stating: “A recession is normally defined in terms of zero or negative growth of GDP in at least two successive quarters.” Here are two IMF economists stating: “Most commentators and analysts use, as a practical definition of recession, two consecutive quarters of decline in a country’s real (inflation adjusted) gross domestic product (GDP) …” Here’s a Bank of India report (see p. 28) referring to two quarters of negative GDP growth as a “technical recession.” Here’s a “glossary” from the UK Treasury defining a recession: “The commonly accepted definition of a recession in the UK is two or more consecutive quarters (a period of three months) of contraction in national GDP.” Here’s a short comment from the World Economic Forum stating: “There is no official, globally recognized definition of a recession. In 1974, the US economist Julius Shiskin described a recession as ‘two consecutive quarters of declining growth’, and many countries still adhere to that.”
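As a purely mechanical illustration of the “two consecutive quarters” rule that these definitions describe, here is a minimal sketch; the quarterly growth numbers in the example are invented.

```python
# Minimal sketch of the "technical recession" rule described above: flag any quarter
# in which real GDP growth has been negative for two quarters in a row.
from typing import List

def technical_recession_flags(quarterly_gdp_growth: List[float]) -> List[bool]:
    """For each quarter, True if it and the preceding quarter both had negative growth."""
    flags = [False]  # the first quarter has no predecessor to compare against
    for prev, curr in zip(quarterly_gdp_growth, quarterly_gdp_growth[1:]):
        flags.append(prev < 0 and curr < 0)
    return flags

# Invented quarterly growth rates, for illustration only.
growth = [0.8, -0.4, -0.4, 0.3]
print(technical_recession_flags(growth))  # [False, False, True, False]
```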

In short, if the US economy does experience two consecutive quarters of declining GDP growth and some commenters choose to call that a “recession,” those commenters have plenty of justification–as a matter of nomenclature–for doing so.

But setting aside the issue of whether people can find a justification for using the word, is the term “recession” appropriate for a low-unemployment US economy experiencing a surge of inflation? Julius Shiskin, mentioned above, was the Commissioner of the US Bureau of Labor Statistics back in 1974, when he wrote an article for the New York Times about dating recessions. He wrote:

A rough translation of the bureau’s [NBER’s] qualitative definition of a recession into a quantitative one, that almost anyone can use, might run like this:

In terms of duration—declines in real G.N.P. for 2 consecutive quarters; a decline in industrial production over a six‐month period.

In terms of depth—A 1.5 per cent decline in real G.N.P.; a 1.5 per cent decline in nonagricultural employment; a two‐point rise in unemployment to a level of at least 6 per cent.

In terms of diffusion—A decline in nonagricultural employment in more than 75 per cent of industries, as measured over six‐month spans, for 6 months or longer.
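Just to make the structure of Shiskin's rule of thumb concrete, here is a rough sketch of his three tests in code; the field names are my own and the example numbers are invented, so this is only a restatement of the quoted criteria, not an official recession test.

```python
# A rough coding of Shiskin's 1974 rule-of-thumb recession criteria as quoted above.
# The data structure and the sample values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DownturnSummary:
    quarters_of_gnp_decline: int          # consecutive quarters of falling real GNP
    months_of_ip_decline: int             # length of the industrial-production decline
    real_gnp_decline_pct: float           # peak-to-trough fall in real GNP, percent
    nonag_employment_decline_pct: float   # peak-to-trough fall in nonagricultural jobs, percent
    unemployment_rise_points: float       # rise in the unemployment rate, percentage points
    unemployment_rate_peak: float         # highest unemployment rate reached, percent
    share_industries_losing_jobs: float   # share of industries with falling employment, 0-1

def meets_shiskin_criteria(d: DownturnSummary) -> bool:
    duration = d.quarters_of_gnp_decline >= 2 and d.months_of_ip_decline >= 6
    depth = (d.real_gnp_decline_pct >= 1.5
             and d.nonag_employment_decline_pct >= 1.5
             and d.unemployment_rise_points >= 2.0
             and d.unemployment_rate_peak >= 6.0)
    diffusion = d.share_industries_losing_jobs > 0.75
    return duration and depth and diffusion

# Invented example: two negative GDP quarters alone fail the depth and diffusion tests.
example = DownturnSummary(2, 6, 0.8, 0.0, 0.2, 3.6, 0.30)
print(meets_shiskin_criteria(example))  # False
```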

The specific criteria used by NBER have evolved over time (see here and here), but the general sense that a “recession” should include more than just GDP statistics has continued. Shiskin also wrote in 1974:

The bureau’s [NBER] definition of a recession is, however, known to only a small number of specialists in business cycle studies. Many people use a much simpler definition—a two‐quarter decline in Real G.N.P. While this definition is simplistic, it has worked quite well in the past. … The public at large now appears to use the term recession to describe a period of economic distress, such as we have had in recent months.

As someone who tries to keep my categories clear, I would not yet refer to the US economy as “in a recession,” even if the GDP numbers later this week show two consecutive quarters of decline. In my mind, not all economic distress is properly called a “recession.” The current problems of the US economy, it seems to me, are a mixture of the surge of inflation that is driving down the buying power of real wages for everyone, and the ongoing adjustment of labor markets, supply chains, and firms to the aftereffects of the pandemic. But I also wouldn’t criticize too loudly those who apply the “recession” label based on two quarters of negative real GDP growth. Moreover, efforts by the Federal Reserve to choke off inflation with a series of interest rate increases have contributed to a number of US recessions since World War II, so there is a real risk that in the coming months, the US economy will end up experiencing the combination of lower output and job losses that will qualify as a “recession” by any definition.

Some Economics of Codetermination

Codetermination refers to the direct worker involvement in management and governance of firms. In economic theory, one can make an argument for either positive or negative effects of codetermination. The case in favor is that workers have detailed information about actual jobs and processes in a way that management can never quite match; in addition, workers will always have some concern about whether they are likely to be replaced by technology or by cheaper workers. Thus, if codetermination offers an incentive for workers to tap into their detailed knowledge in a way that benefits the firm, and also offers incentives to workers to invest in their relationship with the employer, it could unlock productivity, wage, and profitability gains.

On the other side, current workers do not have a direct reason to take the welfare of either future workers or the owners of the firm into account. Thus, if codetermination leads workers to maximize their wages now, at the cost of lower investments in the future of the firm, the firm can end up immobilized by worker-management disputes and ultimately worse off.

What does the evidence say? Simon Jager, Shakked Noy, and Benjamin Schoefer provide an overview of the evidence on European rules in “What Does Codetermination Do?” (ILR Review, August 2022, 75(4), pp. 857–890). Perhaps strangely, they find little effect at all. To set up their conclusion, it’s useful to backfill for a moment and describe what’s involved. For example, this is board-level codetermination, European-style:

Existing board-level representation laws almost always grant workers a minority position on the board—usually 20 to 40% of the seats (ETUI 2020). The notable exception is in Germany: Although German firms with between 500 and 2,000 employees must allocate only 33% of board seats to workers, firms with more than 2,000 employees are subject to “quasi-parity” representation, meaning that 50% of seats go to workers, but shareholders receive a tie-breaking vote. Uniquely to Germany and for historical reasons going back to the aftermath of World War II, firms with more than 1,000 employees in the iron, coal, and steel sectors are subject to full parity representation, with no casting vote for shareholders.
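The German thresholds in this passage can be summarized as a simple lookup; the sketch below just restates the quoted rules, with the exact boundary treatment and the sector flag as my own shorthand.

```python
# Restating the German board-level codetermination thresholds described above.
# The boundary treatment and sector flag are shorthand for the quoted description.

def worker_board_share(employees: int, coal_iron_steel: bool = False) -> str:
    """Approximate worker share of supervisory-board seats under German rules."""
    if coal_iron_steel and employees > 1000:
        return "50% of seats (full parity, no shareholder casting vote)"
    if employees > 2000:
        return "50% of seats (quasi-parity, shareholders hold the tie-breaking vote)"
    if employees >= 500:
        return "33% of seats"
    return "no board-level codetermination requirement"

print(worker_board_share(800))                         # 33% of seats
print(worker_board_share(5000))                        # 50% quasi-parity
print(worker_board_share(1500, coal_iron_steel=True))  # 50% full parity
```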

There is also “shop-floor representation:”

Shop-floor representation laws vary widely in the formal authority they give to worker representatives. Employers are usually required to inform and consult shop-floor representatives in advance about decisions regarding working hours, working conditions, or the recruitment, transfer, or dismissal of employees (Aumayr et al. 2011). These requirements do not convey any substantive authority to workers, but may create implicit pressure on employers to reach a consensus with workers. Some countries additionally give shop-floor representatives narrow rights to appeal to employment courts to overturn employer decisions (Van het Kaar 1997; Visser 2021). Several countries, including Germany, Austria, Sweden, Norway, and the Netherlands, grant shop-floor representatives more substantive co-decisionmaking powers (Visser 2021). For example, in Germany, shop-floor “works councils” have a right to participate in decisions about working hours, leave arrangements, the introduction of productivity-monitoring technology, and performance-related pay (Addison, Schnabel, and Wagner 2001). They can also veto “unwarranted” dismissals of staff, in which case the employer must bring the matter to a labor court if they wish to override the veto.

From the abstract, the authors sum up the evidence on board- and shop-level codetermination this way:

The available micro evidence points to zero or small positive effects of codetermination on worker and firm outcomes and leaves room for moderate positive effects on productivity, wages, and job stability. The authors also present new country-level, general-equilibrium event studies of codetermination reforms between the 1960s and 2010s, finding no effects on aggregate economic outcomes or the quality of industrial relations. They offer three explanations for the institution’s limited impact. First, existing codetermination laws convey little authority to workers. Second, countries with codetermination laws have high baseline levels of informal worker voice. Third, codetermination laws may interact with other labor market institutions, such as union representation and collective bargaining.

As this explanation implies, a difficulty in evaluating codetermination is that it comes packaged with other attitudes and laws. In a country with codetermination as an ongoing situation, both managements and workers will necessarily become accustomed to it. In a country with codetermination and also widespread union membership and power, separating out the effect of codetermination may be difficult. Thus, for American advocates of codetermination, figuring out its effects in the very different context of US labor relations isn’t straightforward.

My own sense is that many of the arguments over codetermination tend to get into discussions of whether workers should have much greater decision-making power in firm governance. Some favor that outcome; some don’t. My point is that co-determination in the European sense isn’t actually about decisive shifts in worker power over corporations: instead, it’s more accurately viewed as providing a required and formalized structure for flows of information and feedback, so that workers and management are forced into forums where they communicate with each other.

Carbon Pricing and Coverage Around the World

The global movement toward carbon pricing is partial and halting, but real. Ian Parry, Simon Black, and Karlygash Zhunussova provide some useful background and analysis in “Carbon Taxes or Emissions Trading Systems? Instrument Choice and Design” (July 21, 2022, IMF Staff Climate Note 2022/006).

This figure illustrates carbon pricing around the world. Putting a price on carbon emissions can happen in different ways: for example, through a carbon tax or through an emissions trading system (where emitters must own permits for the quantity of their emissions, and these permits can be bought and sold). The horizontal axis shows what share of carbon emissions from a given jurisdiction are covered by a carbon price. The vertical axis shows the carbon price converted to US dollars per ton of CO2-equivalent emissions (that is, if it’s a greenhouse gas that isn’t carbon, it’s converted to an equivalent).

The world average for carbon pricing is the red point near the bottom left: that is, about 25% of global carbon-equivalent emissions are covered by some form of carbon pricing, at an average price of about $25 per metric ton of carbon-equivalent emissions. Some countries have coverage of most emissions, but at a very low price (Japan, South Africa). Others have coverage of only a small share of their emissions, but at a high price (Uruguay). In the upper right are the countries with high coverage and a high carbon price (Sweden and Liechtenstein).
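One way to read the world-average point in the figure is as an emissions-weighted summary of coverage and price across jurisdictions; the sketch below shows that arithmetic with made-up jurisdiction data, not the numbers underlying the IMF chart.

```python
# Coverage-weighted summary statistics for carbon pricing across jurisdictions.
# The jurisdiction data are invented for illustration; they are not the IMF figures.

jurisdictions = [
    # (name, annual CO2-equivalent emissions in Mt, share of emissions priced, price in USD/ton)
    ("A", 1000, 0.80, 10),
    ("B", 500, 0.20, 120),
    ("C", 2000, 0.00, 0),
]

total_emissions = sum(e for _, e, _, _ in jurisdictions)
covered_emissions = sum(e * share for _, e, share, _ in jurisdictions)
# Average price paid on the emissions that actually face a carbon price:
avg_price_on_covered = (
    sum(e * share * price for _, e, share, price in jurisdictions) / covered_emissions
)

print(f"share of emissions covered: {covered_emissions / total_emissions:.0%}")  # 26%
print(f"average price on covered emissions: ${avg_price_on_covered:.0f}/ton")    # $22/ton
```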

There are of course lots of subsidiary questions here, which are discussed by the authors. Here’s a quick flavor of some of the questions.

There is the question of what policy tool is preferable. A carbon tax has the advantage that it clarifies the path of future prices, which helps private firms and households in making their choices about how to adapt. But a carbon tax doesn’t directly set a limit on carbon emissions. A system of emissions trading permits does set a quantitative limit. But the carbon price is then determined by interactions with the market for permits, which creates a degree of uncertainty for future planning. And of course, either approach can be enacted with a variety of loopholes, grandfather clauses, and offsets that could weaken its force.

There is the question of overlap between carbon pricing tools and other policy mechanisms. For example, consider the possibility that carbon taxes were substituted, in part, for existing fuel taxes, or for existing miles-per-gallon standards for cars. The economic incentives would shift toward a more direct focus on carbon emissions. A political tradeoff along these lines might bring some who are dubious about carbon pricing on board.

Finally, there is a question of what a broad-based tax for all greenhouse gas emissions looks like. Applying a carbon price (whether via tax or emissions trading system) to direct production of fossil fuels (or to usage of imported fossil fuels) is fairly straightforward. But what about provisions that affect methane that is now released as part of fossil fuel production? What about greenhouse gases generated by agriculture (say, by various soil practices and livestock) or by industry (say, by cement manufacturing)?

Learning Loss During the Pandemic

A few months ago, I ran across a comment from a K-12 educator talking about students who had fallen behind during the pandemic. I can’t seem to track down the comment, but it was quite positive and gung-ho, claiming that the school system knew what needed to be done to help these students catch up. Maybe I’m just a surly negative-minded grinch, but I didn’t believe it. After all, there are lots of students who were already falling behind before the pandemic, and that pattern has gone on for a long time. It seems clear to me that school systems have not in fact shown that they know what is needed to help students catch up.

The problem is real. Harry Anthony Patrinos, Emiliana Vegas, and Rohan Carter-Rau provide an overview of the studies done so far about learning loss in the first two years of the COVID pandemic in “An Analysis of COVID-19 Student Learning Loss” (May 2022, World Bank Policy Research Working Paper 10033). A short readable overview of the study by the authors is available here.

This paper is a survey of existing research. The authors did a search for research papers from around the world that offered estimates of COVID-related learning loss for K-12 students. They hunted down 61 papers, but some of those just compared before-and-after test scores, sometimes with very small samples or with no effort to adjust for potentially confounding variables. For example, if schools serving certain groups of students were more or less likely to close during the pandemic, or if the families of certain groups of students reacted differently to the pandemic, such differences should be taken into account. Thus, the authors focused mainly on the 35 studies with large sample sizes that made some effort to sort out other factors. They write:

Our final database consists of 35 robust studies and reports documenting learning loss, representing data from 20 countries … Most studies (32) find evidence of learning loss. Of the 35 studies reporting learning loss, 27 reported findings in a comparable effect size format. As shown in Figure 1, most studies found learning losses ranging from 0.25 to 0.12 SD. In five studies, learning losses were even greater. The average learning loss across these studies is 0.17 standard deviation – which equates to over half a school year of learning loss.

The studies consistently find heterogeneous effects of learning loss by socio-economic status, past academic learning, and subject of learning. In our review, 20 studies examined learning loss by socio-economic status. Of these, 15 find greater learning loss among students or schools with lower socio-economic status, while 5 fail to find a statistically significant difference. Many studies have also found learning loss to be worse for students who had struggled academically prior to the pandemic. …

Learning losses are significant in countries reporting losses through descriptive statistics. For example, adolescent girls’ literacy and numeracy scores declined by more than 6 percent in Bangladesh (Amin et al 2021). In India, the share of grade 3 children in government schools who can perform simple subtraction decreased from 24 percent in 2018 to only 16 percent in 2020 and the share who can read a grade 2 level text decreased from 19 percent in 2018 to 10 percent in 2020 (ASER 2021a). In Pakistan, the proportion of children in classes 1-5 who can read a story declined from 24 percent in 2019 to 22 percent in 2021 (ASER 2021b). There are reported learning losses in one college in Sri Lanka (Sayeejan and Nithlavarnan 2018). In Uganda, the percentage of learners rated proficient in literacy in English and numeracy in 2021 dropped by 5 percent and 13 percent from that of 2018 (NAPE 2021). In Canada, grade 2 and 3 students reading assessments declined by 4 to 5 points (Georgiou 2021). In the Republic of Korea, there was a significant decrease in scores for medical school students (Kim et al. 2021).
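To see how an effect size of 0.17 standard deviations translates into “over half a school year,” here is the back-of-the-envelope arithmetic, assuming the common rule of thumb that a full school year of learning is on the order of 0.3 standard deviations; that benchmark is my assumption, not a figure from the World Bank paper.

```python
# Back-of-the-envelope conversion from an effect size in standard deviations to
# "school years of learning." The 0.3 SD-per-year benchmark is an assumed rule of
# thumb, not a figure taken from the World Bank paper.

SD_PER_SCHOOL_YEAR = 0.3  # assumed typical annual learning gain, in standard deviations

def sd_to_school_years(effect_size_sd: float) -> float:
    return effect_size_sd / SD_PER_SCHOOL_YEAR

print(f"{sd_to_school_years(0.17):.2f} school years")  # about 0.57, i.e., over half a year
```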

The McKinsey consulting firm provides some additional background in its report, “How COVID-19 caused a global learning crisis” (April 2022). Here’s a figure showing the patterns of school shutdowns around the world.

The McKinsey authors describe the global patterns in this way:

  • High-performing systems, with relatively high levels of pre-COVID-19 performance, where students may be about one to five months behind due to the pandemic (for example, North America and Europe, where students are, on average, four months behind).
  • Low-income prepandemic-challenged systems, with very low levels of pre-COVID-19 learning, where students may be about three to eight months behind due to the pandemic (for example, sub-Saharan Africa, where students are on average six months behind).
  • Pandemic-affected middle-income systems, with moderate levels of pre-COVID-19 learning, where students may be nine to 15 months behind (for example, Latin America and South Asia, where students are, on average, 12 months behind).

These changes may not seem huge, but they are. For example, consider for a moment what it would be worth if, on average, every student had accomplished the equivalent of an additional year of academic learning upon graduation. For education reform policymakers, this kind of outcome would be like hearing angels sing. Now consider the alternative, where on average students are a half-year or a year behind.

The pandemic-related learning loss should be considered a national catastrophe, with an aggressive response that goes well beyond getting children back to in-person learning. Curriculums need adjusting, because so many students won’t be ready for what they would normally have been taught. There should be plans and funding for longer school days and summer sessions for the next several years, especially for students who were already falling behind even before the pandemic. Small-group and in-person tutoring is one method that does seem to help students catch up, and a dramatic expansion of such programs–bringing in volunteers from retirees to parents to college and high school students, who can operate both in-person and on-line–is badly needed. But my surly, negative-minded grinchy self suspects that we are just going to muddle through the pandemic learning loss issues, without dramatic changes, and that will be a costly and long-term mistake.

Interested readers may also want to check out an overview article discussing these studies and others in the Economist magazine (July 7, 2022).

50 Years of US Environmental Policy

The US environmental movement has deep historical roots: as one example, Yellowstone National Park was created by an act of Congress and signed into law by President Ulysses S. Grant in 1872. However, the modern era of US environmental policymaking is usually dated to a wave of federal laws passed in the early 1970s. Joseph S. Shapiro offers a discussion of “Pollution Trends and US Environmental Policy: Lessons from the Past Half Century” (Review of Environmental Economics and Policy, Winter 2022, 16:1, pp. 42-61).

The basic story is one of policy success: that is, a number of major environmental pollutants have been dramatically reduced, and environmental policy is a prominent cause of those reductions. Shapiro describes some of the trends in environmental pollutants this way:

Indeed, the data indicate that ambient concentrations of some air pollutants have fallen dramatically in recent decades. Between 1980 and 2019, concentrations of carbon monoxide, lead, and sulfur dioxide each fell by more than 80 percent; there have been similar declines in particulate matter, but ground-level ozone concentrations have fallen by only a third … The most common indicators of surface water pollution have decreased substantially since the 1970s. For example, between 1972 and 2014, fecal coliforms (a measure of bacteria associated with human and animal wastes) fell by about two-thirds. Total suspended solids (a measure of the total particles suspended in water), which reflects a wide range of pollutants, fell by about a third over the same period … Data from the Toxics Release Inventory (TRI) show sustained and large declines in emissions of many types of toxic pollution to water, air, and land.

Environmental data is admittedly imperfect, but the overall pattern seems clear. The main exception to this pattern of sharp declines in pollution is carbon emissions, which instead have experienced smaller and more recent declines. Shapiro writes:

These data show steady increases in US CO2 emissions through 2008, followed by a gradual decline through 2021 (US EPA 2021d). The decline that occurred immediately after 2008 was likely due to the Great Recession. However, the continued decline has more likely been due to falling US natural gas prices caused by hydraulic fracturing (fracking), which has resulted in the substitution of natural gas for coal in electricity generation

Here’s a figure mentioned by Shapiro from the Environmental Protection Agency on US carbon emissions. Overall, the figure shows how US carbon emissions peaked back around 2008 and have declined since. The figure also shows that although public discussions of carbon emissions often focus on transportation (red sector) and electricity generation (blue sector), these two sectors account for only about half of US carbon emissions.

There are a number of possible reasons for the decline in US pollution that have nothing in particular to do with environmental rules. For example, the long-term shift in the US economy away from agriculture and manufacturing and toward services is, overall, less polluting. It may be that some US imports are creating pollution abroad, but reducing US pollution. It may be that overall increases in productivity–fewer inputs needed for the output–would have reduced pollution in some sectors regardless of the pollution rules. However, as Shapiro points out, if one looks at specific emitters of pollution like the US manufacturing sector, or passenger cars, “the evidence appears to support the hypothesis that environmental policy has been the dominant driver of the decrease in pollution.” I would add that construction of sewage treatment plants and rules about wastewater treatment have had important effects in cleaning up surface water, as well.

Have the reductions in pollution been worth it? Economists have sought to itemize and monetize the costs and benefits of environmental rules. The studies can be controversial, as one might expect. In particular, the costs of following an environmental rule can be relatively clear-cut, but the benefits may involve issues like putting a value on human health, or estimating changes in real estate prices (say, whether houses next to a cleaned-up river are worth more as a result), or values of improved recreation activities, or even trying to put a value on people’s preference for the environment to be cleaner.

That said, Shapiro summarizes a recent study in this way: “[O]ne recent review finds that for all [federal] regulations analyzed between 1992 and 2017, the ratio of total estimated benefits to total estimated costs was 12.4 for air pollution, 4.8 for drinking water, 3.0 for greenhouse gases, and 0.8 for surface water policy.” The area with the weakest support involves surface water policy. The issue here is that while improved drinking water has benefits for human health, the benefits of cleaner rivers and lakes are typically based on estimates of changes in real estate values and recreation.

These are averages over a variety of rules, and so it remains likely that some rules have much larger payoffs than others. But in evaluating an area of regulations as a whole, when benefits are a multiple of costs, it’s a good sign. Indeed, it suggests that tightening a number of these environmental rules–especially on air quality–would bring additional benefits.

Technology and Job Categories in Decline

There’s a widespread concern that the spread of new artificial intelligence technologies might cause substantial job loss in certain areas. For example, why use a travel agent if you can book the reservations online? Why use a financial planner if you can use software? Will software diminish the number of people hired as tax preparers? Will smart robots diminish the number of people who have jobs moving stock in warehouses? What about fewer retail clerks being needed because of scanners? Why have a radiologist look at every single X-ray, if a software scan proves to be a much faster and more accurate way to do the first look?

So what’s the evidence? Are job losses more severe in areas that seem especially susceptible to new technologies? Michael Handel at the Bureau of Labor Statistics puts together data from 1999 up to pre-pandemic years, and then looks at projections up through 2029, on the number of people employed in various job categories, in “Growth trends for selected occupations considered at risk from automation” (Monthly Labor Review, July 2022). He writes:

Recent advances in robotics and artificial intelligence (AI) have attracted even more interest than usual, and the breadth and speed of these advances have raised the possibility of widespread job displacement in the near future. Many observers consider these new technologies fundamentally different from previous waves of computing technology. New computing capacities—in areas such as image recognition, robotic manipulation, text processing, natural-language processing, and pattern recognition, and, more generally, the ability to learn and improve rapidly in relatively autonomous ways—represent a break from the hand-coded, rules-based programs of the past. In this view, newer robots and AI represent a clear departure from previous waves of computing, one that accelerates the pace of technological change and job displacement.

Handel creates a list of jobs that are commonly thought to already be susceptible to changes in artificial intelligence, or may soon be susceptible. Here’s the list of the jobs, and how the number of people employed in such jobs has been changing.

In a number of cases, starting with financial advisers and interpreters near the top of the list, the number of workers in these categories seems to be rising, rather than falling. Apparently, in a number of cases the new technologies lead to changes in the previous ways of carrying out a given job, but they do so in a way that complements and augments the ability of workers in that category to do their jobs–and thus leads to growth in the number of jobs, rather than decline. There are also counterexamples, like job declines in the categories of travel agents, computer programmers, telemarketers, and welders. In these areas, new technologies seem to be substituting for workers, rather than complementing them.

There has been a fear for a couple of centuries now that technology destroys jobs. It would be more accurate to say that technology alters the mix of jobs. Given that most workers build up expertise in a certain job and then may find it hard or costly to change to a different job, the shifts in technology will benefit some and hurt others. Moreover, predicting when technology will end up augmenting jobs (apparently, financial advisers and interpreters) and when it will end up substituting for jobs (apparently, computer programmers and travel agents) is complex and often not obvious in advance.

Handel also sorts out the 30 job categories that have experienced the largest losses in absolute numbers from 1999-2018. Many of these categories are not about changes in newfangled artificial intelligence technologies, but more basic changes from the previous wave of computerization and automation, including jobs in data entry, word processing, switchboard and computer operators, and file clerks. However, it’s worth remembering that as these job categories were being staggered by losses, the overall number of jobs in the US climbed by 17%.

Parents of Economics PhDs: Their Educational Background

It’s not a big shock that those who get a PhD tend to have parents who also had a higher level of education. If your parents have some knowledge about how to navigate the tangled forests of academia, along with the financial resources to help you along, you are more likely to get that advanced degree. But what is surprising, at least to me, is that this connection between parental education and getting a PhD is stronger for economics than for most other fields.

Robert Schultz and Anna Stansbury provide the facts in “Socioeconomic Diversity of Economics PhDs” (March 2022, Peterson Institute for International Economics, WP-22-4). This figure shows some of the background. The first panel shows, in each field, what share of those with PhDs have parents who did not complete a college degree. Economics is the lowest. The second panel shows what share of those with PhDs in a given field have parents who completed a college degree, but not an additional degree. Economics is near the top. The third panel shows what share of those with PhDs in a given field have parents who completed a graduate degree in some field (not necessarily the same one).

One possible concern here is that many PhD programs at American universities have a large share of international students. Perhaps the patterns would be different if we focused only on US-born PhDs? Not really. Here are the same figures, this time with only US-born PhDs. As you can see, US-born economics PhDs are least likely to have parents without a BA degree and most likely to have parents with an advanced degree. It’s true that economics PhDs don’t look different in the middle graph of those with only a BA degree, but it’s also true that the differences in that middle graph across different PhD fields are relatively small.

This distinctive pattern of economics PhDs seems to have emerged in the 1980s and 1990s, and has remained that way since then. This graph shows the connection between whether a parent has an advanced degree and an individual’s likelihood of getting a PhD. The overall pattern here is interesting: across many fields, those who get PhDs are more likely to have parents with advanced degrees. For many other fields, this pattern has flattened out in the last couple of decades: in economics, not so much. There is a broader social issue here, which is that higher education has become a way for the well-off to give their children a socioeconomic boost as well–an intergenerational social mechanism that is probably much more important for most families than the size of any purely financial inheritance. But my focus here is on the pattern for economics PhDs, where this overall pattern is strongest.

Why does economics stand out from other subject areas in these comparisons? The answer isn’t obvious. One possibility is that the gap between a BA degree and PhD study is large in most fields, but maybe largest in economics. In most economics programs, it’s possible to get high grades in all your required classes and your senior project, but if you have not taken an additional heavy dose of math and statistics beyond those requirements, or if your undergraduate department hasn’t prepared you for graduate school expectations in other ways, you are less likely to be admitted and to succeed in an economics PhD program. Perhaps parents with graduate degrees are more aware of these issues, and so their children who go on to enter an economics PhD program are better prepared.

Another theory is that there is something in the way economics is commonly presented or taught at the undergraduate level which makes it less appealing to students whose parents do not have graduate degrees.

A related insight from Schultz and Stansbury is to look at the share of economics PhDs who graduated from what they call the “Ivy Plus” category–that is, the eight Ivy League schools plus Stanford, MIT, the University of Chicago, and Duke. The focus here is on US-born PhD recipients from all fields. The top panel shows that among US-born PhDs in all fields, those in economics are least likely to have received their undergraduate BA from a public institution; the bottom panel shows that among US-born PhDs in all fields, those in economics are much more likely to have received their undergraduate BA from one of the “Ivy Plus” institutions. (Moreover, if one included some of the more selective public universities like the University of California-Berkeley and Michigan in the “Ivy Plus” category, these gaps would appear wider.)

I won’t try to draw big-picture conclusions here about “what it all means about economics.” But the picture that emerges is that PhD programs in economics seem notably less open to the range of parental backgrounds than other fields. Moreover, that lack of openness is being enforced and replicated by admissions committees in economics PhD programs.