Is Trade Still a Viable Path to Development? World Development Report 2020

Many of the world’s development success stories in recent decades followed a broadly similar pattern. The countries became more involved with the world economy, often by exporting manufactured goods produced by low-wage workers. The rise in exports brought economic growth and income to their economies, but perhaps just as important, it also helped to foster a range of managerial, financial, and technological skills. In this way, exporting was a fundamental step on the path to economic development.

The World Development Report 2020, subtitled “Trading for Development in the Age of Global Value Chains,” asks whether that step is still available for countries trying to develop their economies.

International trade expanded rapidly after 1990, powered by the rise of global value chains (GVCs). This expansion enabled an unprecedented convergence: poor countries grew faster and began to catch up with richer countries. Poverty fell sharply. These gains were driven by the fragmentation of production across countries and the growth of connections between firms. Parts and components began crisscrossing the globe as firms looked for efficiencies wherever they could find them. Productivity and incomes rose in countries that became integral to GVCs—Bangladesh, China, and Vietnam, among others. The steepest declines in poverty occurred in precisely those countries.

Today, however, it can no longer be taken for granted that trade will remain a force for prosperity. Since the global financial crisis of 2008, the growth of trade has been sluggish, and the expansion of GVCs has slowed. The last decade has seen nothing like the transformative events of the 1990s—the integration of China and Eastern Europe into the global economy and major trade agreements such as the Uruguay Round and the North American Free Trade Agreement (NAFTA).

At the same time, two potentially serious threats have emerged to the successful model of labor-intensive, trade-led growth. First, the arrival of labor-saving technologies such as automation and 3D printing could draw production closer to the consumer and reduce the demand for labor at home and abroad. Second, trade conflict among large countries could lead to a retrenchment or a segmentation of GVCs.

What does all this mean for developing countries seeking to link to GVCs, acquire new technologies, and grow? Is there still a path to development through GVCs?

World Bank reports often have a certain kind of can-do spirit. When you ask “is there still a path to development?” the answer is pretty much always an appropriately qualified “yes.” Thus, a substantial part of the report offers useful evidence that developing countries (or regions of countries) that have become part of global value chains have indeed experienced substantial benefits. Moreover, the report argues that developing countries which pursue domestic economic reforms and open trade, along with social and environmental protections, can still benefit. There is the expected call for more and better-designed international trade agreements. Here, I want to focus on the theme of “Technological Change” and trade as discussed in Chapter 6 of the report.

Information and communications technology is making it much easier to coordinate production chains and suppliers across countries.

High-speed Internet enables firms in developing countries to link to GVCs [global value chains]. The introduction of fast Internet in Africa and China has spurred employment and export growth, as recent studies of the economic effects of the rollout have shown. In Africa, the gradual arrival of submarine Internet cables led to faster job growth (including for low-skilled workers) in locations that benefited from better access to fast Internet relative to those that did not, with little or no job displacement across space. Increased firm entry, productivity, and exporting are among the drivers of the higher net job creation in these locations. Similarly, in China provinces experiencing an increase in the number of Internet users per capita also witnessed faster export growth, with more firms competing in international markets and a higher share of provincial output sold abroad. These examples attest to the potential of ICTs [information and communication technologies] to help countries become part of international supply chains. They also show that the uneven provision of ICT infrastructure can aggravate spatial inequalities …

In many developing countries, the costs of dealing with customs as goods cross the border, and of transporting goods within countries, are high. Digital technologies can help here, too.

Digital technologies can improve customs performance by automating document processing and making it possible to create a single window for streamlining the administrative procedures for international trade transactions. In Costa Rica, a one-stop online customs system increased both exports and imports. Similarly, in Colombia computerizing import procedures increased imports, reduced corruption cases, bolstered tariff revenues, and accelerated the growth of firms most exposed to the new procedures. …
Some robotics and artificial intelligence applications might further reduce logistics costs, the time to transport, and the uncertainty of delivery times. At ports, autonomous vehicles might unload, stack, and reload containers faster and with fewer errors. Blockchain shipping solutions may lower transit times and speed up payments. The Internet of Things has the potential to increase the efficiency of delivery services by tracking shipments in real time, while improved and expanded navigation systems may help route trucks based on current road and traffic conditions. Although the empirical evidence on these impacts is limited, it is estimated that new logistics technologies could reduce shipping and customs processing times by 16 to 28 percent … 

In addition, developing countries face large intranational trade costs, which determine the extent to which producers and consumers in remote locations are affected by changes in trade policy and international prices. For example, the effect of distance on trade costs within Ethiopia or Nigeria is four to five times larger than in the United States. Intermediaries capture most of the surplus from falling world prices, especially in more distant locations. Therefore, consumers in remote locations see only a small part of the gains from falling international trade barriers. Despite recent advances in the provision of ICT infrastructure, the scope for further expanding access to high-speed Internet in developing countries remains huge. … For many goods traded in GVCs, a day’s delay is equal to imposing a tariff in excess of 1 percent.

Computerized translation between languages reduces a barrier to trade, too.

Machine learning also reduces the linguistic barriers to trade and GVC participation. One application of machine learning—machine translation—has improved in recent years. For example, the best score at the Workshop on Machine Translation for English to German rose from 15.7 to 28.3, according to a widely used comparison metric, the BLEU score. The introduction of machine translation from English to Spanish by eBay has significantly boosted international trade between the United States and Latin America on this platform, increasing exports by 17.5 percent. These effects reflect a reduction in translation-related search costs and show that artificial intelligence has already begun to boost trade in North and South America.

There have been concerns that in a world economy full of automation, robots, 3D printing, and other technology, developing countries may face a risk of “premature deindustrialization”–that is, they won’t be able to use their natural comparative advantage of cheap labor to enter global value chains in manufactured goods. But at least so far, robots and automation seem to be boosting international trade between high-income countries and emerging markets, rather than leading to “reshoring” of previously imported products and a reduction in trade.

Despite the concerns about the effects of automation, the evidence that reshoring will result is very limited. … Thus far, the rising adoption of industrial robots and 3D printing seems to have promoted North–South trade. Greater robot intensity in production has led to more imports sourced from lower-income countries in the same broad industry—and to an even stronger increase in gross exports (which embody imported inputs) to those countries. The surge in imports from the South has been concentrated in intermediate goods such as parts and components. The positive impact of automation on imports, particularly on imports of intermediates, attests to the importance of examining the effects of robotization on trade through a GVC framework. More-traditional trade models would predict the increase in exports by the North but fail to foresee the surge in imports from the South in the same industry. Rather than reducing North–South trade, robotization seems to have been boosting it, although it is uncertain whether this trend is likely to continue.

The rise of information technology, and its effect in reducing transportation costs, has caused a wave of not-previously-traded products to become part of international trade. In many cases, these are intermediate and unfinished goods that it now makes economic sense to ship across national borders. In other cases, they are brand-new goods–even services or digital goods.

Since the 1990s, many new types of products have entered global trade, primarily intermediate goods, reflecting the increasing fragmentation of production and the emergence of entirely new products. Indeed, the trade in new products has grown dramatically. In 2017, 65 percent of trade was in categories that either did not exist in 1992 or were modified to better reflect changes in trade. Trade in intermediate goods (parts and components and semifinished goods) expanded, and entirely new products entered global trade. For example, trade in IT products tripled over the past two decades, as trade in digitizable goods such as CDs, books, and newspapers steadily declined from 2.7 percent of the total goods trade in 2000 to 0.8 percent in 2018. Technological developments are likely to continue to produce product churning. Because of technological progress, more goods and services are likely to become tradable over time. For example, platforms such as Upwork and Mechanical Turk make it easier for businesses to outsource tasks to workers who can perform them virtually. And new goods and services are likely to be developed, including ones not even imaginable today, thereby boosting the incentives to trade.

One potential source of productivity for developing countries is that international trade raises the rewards for organizing into larger firms and increasing the scale of production:

In part because of high trade costs, firms in low-income countries tend to operate on a small scale and are less likely to export or import. The modal manufacturing firm in the United States has many workers, and larger firms tend to be more productive, pay higher wages, and are more likely to export and import. By contrast, the modal firm in most developing countries has one worker, the owner. Among firms that do hire additional workers, most hire fewer than 10. In India, Indonesia, and Nigeria, firms with fewer than 10 workers account for more than 99 percent of the total.

But behind all these legitimate reasons for overall enthusiasm, some potential problems lurk. It’s interesting and useful to discuss the ways that technology is likely to increase global trade, and how it can still serve as a pathway to higher growth for developing countries. However, technology typically brings a mixture of winners and losers, both within and across countries. Some of the international trade success stories from developing economies will happen as potential competitors are crowded out. For a hint of these negative possibilities, here are a couple of paragraphs from near the end of the chapter. I have inserted boldface type for the various qualifications, the “maybes” and the “likelys” and the tradeoffs. As is so often true when thinking about the effects of new technology, reading the passage without emphasizing these qualifications is a soothing and positive experience; reading it while emphasizing these qualifications is worrisome. Both reactions are valid!

Although predicting the future is a treacherous exercise, new technologies will likely reduce trade costs and make it easier to participate in global markets. Such outcomes may offer developing countries new opportunities to link into GVCs. However, the attendant intensification of competition may make it more challenging for countries to succeed. Platform firms, for example, are making it easier to connect, but their reputation mechanisms for verifying supplier quality tend to foster concentration and make it harder for entrants to grow. They are creating new challenges for regulators both because they wield market power and because their interactions with agents in different parts of the value chain may create potential conflicts of interest and enhance the scope for anticompetitive conduct.

Automation anxiety is not warranted for all developing countries. Although some countries are likely to lose manufacturing employment because of greater competition in output markets, countries that are part of GVCs and supplying inputs to other countries that are automating may see an increase in the demand for their goods, and consumers everywhere will enjoy lower prices. The primary challenge arising from new production technologies is to ensure that the benefits are shared and that losers are compensated both across and within countries. Among the countries adopting these technologies, labor market disruptions are likely to be significant, skill premiums are likely to rise, and labor’s share of income may decline further.

Some Economics of the Clean Water Act

Here’s an uncomfortable set of facts about federal clean water policy in the United States: 1) People care about it a lot; 2) Over the years, total spending on clean water has been high; 3) Water quality has improved; and 4) The estimated benefits of clean water regulation in the US seem relatively low and in many cases even negative. My discussion here will draw on an essay by David A. Keiser and Joseph S. Shapiro, “US Water Pollution Regulation over the Past Half Century: Burning Waters to Crystal Springs?” Journal of Economic Perspectives, Fall 2019, 33:4, pp. 51-75.

Keiser and Shapiro offer some evidence from Gallup polls that clean water is traditionally near the top of environmental concerns.

The amount spent under clean water legislation has been substantial. They write:

Over the period 1970 to 2014, we calculate total spending of $2.83 trillion to clean up surface water pollution, $1.99 trillion to provide clean drinking water, and $2.11 trillion to clean up air pollution (all converted to 2017 dollars). Total spending to clean up water pollution exceeded total spending to clean up air pollution by 70 to 130 percent. … Since 1970, the United States has spent approximately $4.8 trillion (in 2017 dollars) to clean up surface water pollution and provide clean drinking water, or over $400 annually for every American. In the average year, this accounts for 0.8 percent of GDP, making clean water arguably the most expensive environmental investment in US history. For comparison, the average American spends $60 annually on bottled water … 
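As a quick plausibility check on those figures, the per-American number can be reproduced with back-of-envelope arithmetic (a sketch only; the average-population figure below is my rough assumption, not a number from the report):

```python
# Back-of-envelope check of the Keiser-Shapiro spending figures.
# The average-population number is an illustrative assumption,
# not a figure taken from their paper.

surface_water = 2.83e12   # total 1970-2014 spending, 2017 dollars
drinking_water = 1.99e12
total_water = surface_water + drinking_water
print(f"total water spending: ${total_water / 1e12:.2f} trillion")  # ~ $4.8 trillion

years = 2014 - 1970 + 1          # the 45-year window they study
avg_population = 250e6           # rough assumed average US population over the period
per_person_per_year = total_water / years / avg_population
print(f"per American per year: ${per_person_per_year:.0f}")  # over $400, as the authors say
```

The arithmetic lands a bit above $400 per American per year, consistent with the authors’ “over $400” phrasing.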

The quality of water has improved. For example, the share of wastewater and industrial discharge being treated has risen. One common measure is whether the water is “fishable,” and the share of water “not fishable” has been declining.

But here’s a kicker: the benefit-cost ratios for cleaning up water, especially surface water, don’t look as good as the ratios for cleaning up air pollution. The first column looks at benefit-cost analyses for cleaning up surface water, typically under the Clean Water Act, which supported sewage treatment plants and regulates facilities discharging waste from a “fixed source,” like a pipe, into navigable waters. The second column looks at benefit-cost ratios for rules about cleaning up drinking water, typically under the Safe Drinking Water Act, which sets and enforces drinking water standards and also has a say in cleaning up groundwater.

The perhaps startling pattern is that the benefit-cost ratios for surface water rules are typically less than one, meaning that benefits are below costs. For drinking water, the benefit-cost ratios on average exceed one, but it’s still true that 20% of the rules have a benefit-cost ratio below one. Rules about air pollution have much better benefit-cost ratios.

So what’s going on here? Here are some thoughts:

1) The rules listed here are often additions to earlier rules. Thus, it’s possible that the earlier rules protecting surface water and drinking water had better benefit-cost ratios, but now that some of the worst problems have been addressed, the benefit-cost ratios for additional rules are lower.

2) For surface water rules in particular, the “benefits” in these studies rarely involve human health. Reducing illness and saving lives is where the big benefits are. If the benefits are measured as improved recreational opportunities on certain lakes and rivers, the numbers are going to be much lower. In addition, it seems that a number of the studies of the benefits of cleaner surface water don’t take into account improvements in property values or work conditions from being close to cleaner water. Benefits like biodiversity from cleaner water may also be underestimated. As Keiser and Shapiro write: “Most existing benefits of surface water quality are believed to come from recreation, but available data on recreation are often geographically limited (for example, one county, state, or lake) and often come from a single cross section. Hence, our subjective perception is that underestimation of benefits is more likely a concern for surface water quality regulation than for other regulations.”

3) A wide range of evidence has shown that market-based environmental regulation–like using cap-and-trade arrangements or pollution taxes–can be a much cheaper way to achieve a given amount of environmental cleanup. However, it’s often harder to figure out how to use a system of, say, tradeable pollution permits to clean up surface water. As a result, the costs in these benefit-cost calculations may be higher than necessary. However, use of these more flexible tools in surface water cleanup is rising: for example, there is a Chesapeake Bay Watershed Nutrient Credit Exchange and a Minnesota River Basin Trading market. For a discussion from a few years ago about these issues, a good starting point is Karen Fisher-Vanden and Sheila Olmstead, “Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis,” Journal of Economic Perspectives, 2013, 27:1, pp. 147-72.

4) Even granting that reducing water pollution may have greater benefits than currently estimated, and potentially lower costs, a general uneasiness remains that the benefit-cost ratios are lower than for other forms of environmental protection.

There are some big issues about water pollution on the horizon. For example, the original clean water laws back in the early 1970s pretty much skipped over agriculture, but in many parts of the country agricultural runoff is the single biggest surface water pollution problem.

There’s also a big dispute going on over what is meant in the Clean Water Act by the phrase “Waters of the United States.” It’s clear that this includes rivers and lakes. But what about wetlands, headwater areas that drain into rivers and lakes, or streams that come and go seasonally? As the authors write:

Another challenge involves the language of the Clean Water Act protecting “Waters of the United States,” which has led to legal debates over how this term applies to roughly half of US waters, primarily composed of wetlands, headwaters, and intermittent streams. Two Supreme Court decisions held that the Clean Water Act does not protect most of these waters (Rapanos v. United States, 547 US 715 [2006]; Solid Waste Agency of Northern Cook County (SWANCC) v. US Army Corps of Engineers, 531 US 159 [2001]). In 2015, the Obama administration issued the Waters of the United States Rule, which sought to reinstate these protections. However, in 2017, President Trump issued an executive order to rescind or revise this rule. The net benefits of these regulations have also become controversial …

Keiser and Shapiro also point out that there is a LOT more economics research about air quality than water quality, perhaps in part because the Environmental Protection Agency collects and makes available copious data on air pollution, while data on water pollution is collected more sporadically and divided up in many places. They point out a number of ways in which data related to water pollution is becoming more complete and available. But matching up the very local steps to reduce water pollution with the very local effects of water pollution, and then tracing water pollution through the natural hydrogeography, remains in many ways a work in progress.

The Declining Share of Veterans Among Prime-Age Men: The Centennial of Armistice Day

The armistice marking the end of World War I was signed on November 11, 1918. A year later–and 100 years ago today–the first Armistice Day celebrations were held at Buckingham Palace. The US Congress passed a resolution commemorating Armistice Day in 1926, and it became a national holiday in 1938. In 1954, after World War II, its name was changed to Veterans Day in the United States.

But as you may have noticed when attending an event where veterans are encouraged to stand and be recognized for their service, the share of “prime-age” men (a term economists use to describe those in the main working years, ages 25-54) who are veterans has been in sharp decline.

Courtney C. Coile and Mark G. Duggan raise this issue in passing in a Spring 2019 essay in the Journal of Economic Perspectives (“When Labor’s Lost: Health, Family Life, Incarceration, and Education in a Time of Declining Economic Opportunity for Low-Skilled Men,” 33:2, pp. 191-210). They write:

[W]e call attention to perhaps the most significant change among prime-age men in recent decades. In 1980, fully 45 percent of prime-age men surveyed in the Bureau of Labor Statistics’ monthly Current Population Survey reported that they had previously served in the military. This number steadily declined during the next 36 years and stood at just 10 percent by 2016 in this same survey.

Of course, a major reason for this shift is the end of the military draft in 1973, a change in which the arguments and projections of economists about how an all-volunteer military force could function played a substantial role (for a background essay, see John T. Warner and Beth J. Asch, “The Record and Prospects of the All-Volunteer Military in the United States,” Journal of Economic Perspectives, Spring 2001, 15:2, pp. 169-192).

What do we know about the effects of this dramatic social change? Coile and Duggan write:

Much of the economics literature has examined the effect of military service by using plausibly exogenous variation in the likelihood of service driven by one’s draft lottery number (Angrist 1990). This research has tended to find quite modest long-term effects of military service on employment, earnings, and health status (for example, Angrist, Chen, and Frandsen 2010; Angrist, Chen, and Song 2011). However, these studies are unable to capture the peer effects or general equilibrium effects of military service. Recent research has suggested substantial gains to cognitive and noncognitive skills stemming from military service (Spiro, Settersten, and Aldwin 2015) and associated benefits such as the GI bill. Overall, we see a strong need for further work to investigate how changing economic opportunities, declines in military service, and other factors are contributing to or cushioning the problems of low-skilled prime-age men.

This shift away from shared military experience is a large and probably understudied social shift. Many of those who served in the armed forces, and survived, have lasting personal ties both to those they knew and to others who shared the experience.

My suspicion is that the effects of military service in later life are probably quite different between the days of the draft and the all-volunteer force. For example, during the draft, pay could be relatively low and there was not much reason for the armed forces to invest in the human capital of new soldiers, most of whom would be out of military service in a few years. With the all-volunteer force, pay had to be somewhat higher and the armed forces had to focus on training and incentives for retention. When big US companies need a new CEO, they can do a job search outside their own firm. But when the armed forces need a new general or admiral, they have to promote from within.

There are occasional proposals for a national service requirement, proposals which in their rhetoric sometimes piggyback on the strong positive feelings many of us have about veterans. But I’m old enough to have grown up in the aftermath of the Vietnam-era military draft, and it would be a dramatic understatement to say that the draft was unpopular. My own cynical observation about national service proposals is that it’s a case of middle-aged and elderly people voting on which young adults are eligible for exceptions and loopholes, and how the others will be required to spend a couple of years of their lives in low-cost labor. In an odd way, I’d be marginally more sympathetic to a national service proposal requiring, say, that everyone between the ages of 30 and 50 needs to take two years out of their life for full-time, low-wage labor. After all, wouldn’t these ages be potentially an even more productive time to “foster unity” and “build bridges” and “bring people together,” and all the other claims made for a national service requirement? Or perhaps members of Congress could require that they personally each spend one month out of every two years in a full-time, away-from-home national service requirement.

US Dependence on Imported Minerals

This figure shows the US reliance on imports for various minerals, from the US Geological Survey. I’m fully aware that minerals are not equally distributed around the world, and I’m a pro-trade guy, so I won’t lose sleep tonight over these numbers. But during waking hours, I will wonder whether the supplies from other countries are reasonably steady and reliable. I’ll also wonder whether global pollution is worse because US firms are importing minerals from countries with substantially lower environmental standards than the United States.
2018 US Net Import Reliance

Minimum Wages and Overtime Rules

Perhaps the best-known provision of the Fair Labor Standards Act (FLSA) of 1938 is that it set a federal minimum wage for the first time. In addition, this is the law that established the overtime rule: if you are a “nonexempt” worker–which basically means a worker paid by the hour rather than on a salary–and you work more than 40 hours per week, you must be paid time-and-a-half for the additional hours.

Charles C. Brown and Daniel S. Hamermesh take a look at the evidence on both provisions in “Wages and Hours Laws: What Do We Know? What Can Be Done?” (Russell Sage Foundation Journal of the Social Sciences, December 2019, 5:5, pp. 68-87). They write:

Although wages and hours are regulated under the same law, policy developments and research on the law’s impacts could not be more different between the two areas. The federal minimum wage has been raised numerous times; and many subfederal jurisdictions impose their own wage minima that, where they exceed the federal minimum, supersede it. Perhaps because of this variation, a huge literature examining the effects of minimum wages on the U.S. labor market has arisen and has continued to burgeon. A fair conclusion is that American labor economists have spilled more ink per federal budgetary dollar on this topic than on any other labor-related policy. The opposite is the case for regulating hours. The essential parameters of hours regulation have not changed since passage of the act; and perhaps because of this, the dearth of research on the economic impact of hours regulation in the United States, especially recently, is remarkable.

(In the shade of these parentheses, I’ll also mention that this issue of the RSF journal, edited by Erica L. Groshen and Harry J. Holzer, is especially rich in content, including 10 articles on the general theme of “Improving Employment and Earnings in Twenty-First Century Labor Markets.” I’ll list the Table of Contents for the issue, with links to the articles, at the bottom of this post.)

Minimum Wages

The US minimum wage situation has changed dramatically in the last decade or so in a particular way: a much larger share of workers live in states with a minimum wage above the federal level. Brown and Hamermesh write:

Over the past thirty years, however, states’ decisions to increase their minimum wages have become increasingly important given that the federal minimum has changed less frequently. For example, in 2010 (after the 2007 federal increases had become fully effective) only one-third of the workforce was in states with state minima that exceeded the federal $7.25. By 2016, with the federal minimum still at $7.25, that fraction had risen to nearly two-thirds. As of 2018, twenty-nine states … had minimum wages above $7.25. States that have raised their minimum wages above the federal minimum have tended to be high-wage states, and the result has been a minimum wage much more closely (though still imperfectly) aligned with local wages.

 Brown and Hamermesh focus on the studies that try to estimate the effects of a minimum wage by looking at these differences in minimum wages that have arisen across states (leaving the issues involved in studying city-level minimum wages for another day). Here are some of the points they make: 

There are basically three ways to take advantage of the state-level changes and variations in minimum wage: comparisons between states; comparisons between border counties of states; and comparisons between states and “synthetic” control groups, which basically means finding a combination of other areas that had economic patterns similar to those of a certain state before the minimum wage was changed.
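To make the “synthetic” control idea concrete, here is a toy sketch with one treated state and two donor states (all numbers hypothetical). Real studies solve a constrained least-squares problem over many donor areas, with nonnegative weights that sum to one; with only two donors, a simple grid search over the single weight shows the logic:

```python
# Toy synthetic-control sketch (illustrative numbers, not real data).
# Goal: weight the donor states so their combination tracks the treated
# state's pre-treatment employment path as closely as possible.

# Hypothetical pre-treatment employment indices, one value per year
treated = [100.0, 102.0, 104.0, 106.0]
donor_a = [100.0, 103.0, 106.0, 109.0]   # grows faster than the treated state
donor_b = [100.0, 101.0, 102.0, 103.0]   # grows slower than the treated state

def pre_period_mse(w):
    """Mean squared gap between the treated state and w*A + (1-w)*B."""
    synthetic = [w * a + (1 - w) * b for a, b in zip(donor_a, donor_b)]
    return sum((t - s) ** 2 for t, s in zip(treated, synthetic)) / len(treated)

# Grid search over the single weight on donor A
best_w = min((w / 1000 for w in range(1001)), key=pre_period_mse)
print(f"best weight on donor A: {best_w:.2f}")  # 0.50: an even mix tracks the treated state
```

After the weights are fitted on pre-treatment data, the post-treatment gap between the treated state and its synthetic twin is read as the effect of the minimum wage change.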

When doing these comparisons, a researcher will want to adjust for other factors that might affect state economies: for example, a natural disaster that hit one state but not another, or a change in the price of oil that would affect an oil-producing state. A researcher can allow each state or border county to follow its own time trend, or allow the effect of the minimum wage on employment to differ in every state. Is the relationship between a changing minimum wage and employment a straight line or a curved line–and if it’s a curved line, how curved is it? The more variables like this you include, the smaller the estimated effect of a minimum wage on employment is likely to be. There is considerable disagreement and controversy over what variables should be included.

It’s been typical in many of these studies to focus on either teenagers or restaurant workers, because both groups are disproportionately likely to be affected by the minimum wage.

A common finding is that a rise in the minimum wage of 10% raises the wages of teenagers as a group or restaurant workers as a group by about 2%–presumably because some teenagers or restaurant workers were already earning more than the minimum wage and thus weren't affected.
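The arithmetic behind that roughly 2% figure can be sketched with made-up shares; the one-fifth fraction below is an assumption for illustration, not an estimate from the studies.

```python
# Illustrative arithmetic (assumed shares) for why a 10% minimum wage hike
# might raise a group's average wage by only about 2%.

share_at_minimum = 0.20   # assumed fraction of the group earning at/near the minimum
wage_increase = 0.10      # 10% rise in the minimum wage

# Workers already earning above the new minimum see roughly no mandated change,
# so the group's average wage rises by only the affected share times the increase.
group_average_increase = share_at_minimum * wage_increase
print(f"Group average wage rises about {group_average_increase:.0%}")  # ~2%
```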

Estimates of the effect of raising the minimum wage on the employment of either teenagers or restaurant workers are all over the place, depending on exactly how the estimation is done, but they are usually "small"–which in this case means "small enough that the earnings gains caused by a minimum wage increase are only partially offset by employment losses."

Of course, showing that past minimum wage increases had small effects in reducing employment doesn't prove that additional minimum wage increases would also have small effects. The usual belief of economists is that the effects of a rising minimum wage on employment would be small up to some point, but then start getting larger. That point is likely to vary across states–which is why it makes some sense to have different minimum wages across states.

At least one recent study has tried to focus on workers age 16-25 who have not completed high school, rather than on teenagers in general. There is some evidence that a higher minimum wage might have a bigger effect on these low-education workers in particular than on teenagers or restaurant workers as a whole.

It's plausible that the effects of a higher minimum wage on employment might be larger in the long term. For example, perhaps a firm doesn't fire anyone when the minimum wage rises, but instead just slows down on hiring. Or perhaps a higher minimum wage causes certain kinds of firms to be more likely to exit the market over time, or less likely to enter, or more likely to invest in labor-saving technology. Some studies have found support for these effects; others have not.

For some complementary discussion of the evidence on raising minimum wages in previous posts, see:

Overtime Rules

In contrast to minimum wage laws, overtime rules haven't changed much over time. Brown and Hamermesh write: "In the eighty years since the FLSA was enacted, the specification of its crucial parameters regulating hours—a penalty rate of 50 percent extra wages on hours beyond the standard weekly hours (HS) of forty—has not changed." Maybe the main way the issue has come up in recent policy disputes is in proposed laws that would let employers give "comp time" for overtime work, meaning extra vacation time, instead of paying higher wages.

But a big change in the overtime rules has been happening in a subtle way. Back in the mid-1970s, the rule was that a salaried worker had to be paid at least $455/week to be exempt from the requirement to be paid time-and-a-half for overtime. But that $455/week hasn't been changed since then, even though its value has been eaten away by inflation. Brown and Hamermesh calculate that $455/week was about double the median weekly earnings in the US economy back in the mid-1970s; now, it's about 50% of median weekly earnings.
To put this another way, it used to be that you had to be earning a salary of double the typical weekly earnings before you were exempt from overtime rules. Now, you can be paid a much lower salary, half of typical weekly earnings, and you are still exempt from the overtime rules. The rules requiring overtime pay thus have gradually come to apply to many fewer workers over time. The Obama administration tried to raise the limit to $913/week by using an administrative rule, but the courts held (reasonably enough, in my view) that this kind of decision needed to be made by Congress passing a law. Apparently the Trump administration has now proposed raising the limit to $679/week.
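Some back-of-the-envelope arithmetic shows the scale of this erosion; the current median weekly earnings figure below is an assumed round number for illustration, not an official statistic.

```python
# Back-of-the-envelope arithmetic for the overtime exemption threshold.
# The median weekly earnings figure is an illustrative assumption.

threshold = 455        # dollars/week above which a salaried worker can be exempt
median_weekly = 900    # assumed current median weekly earnings (illustrative)

ratio_now = threshold / median_weekly
print(f"$455/week is about {ratio_now:.0%} of assumed median weekly earnings")

# The proposed new limits would move the ratio back up:
for proposed in (679, 913):
    print(f"${proposed}/week would be about {proposed / median_weekly:.0%} of the median")
```

On these assumed numbers, the unchanged $455 threshold sits at roughly half the median, the Trump administration's $679 proposal at about three-quarters, and the Obama administration's $913 proposal at about the median.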
What would happen if the rules were changed so that dramatically more workers needed to be paid overtime for working more than 40 hours/week? Presumably, some of these workers would get paid overtime, but in addition, employers would try to reduce the number of workers working beyond that weekly limit. Brown and Hamermesh run through various calculations and look at some international evidence. They write: "We can conclude that increasing the exempt limit would have raised some salaried workers' earnings and reduced their weekly hours. One exercise suggested that 12.5 million workers would have been affected …"
The effects of changing the rules so that more workers are eligible for overtime pay aren't enormous. Still, for workers who are being paid salaries below the median weekly wage, and thus aren't eligible for overtime, it could be a meaningful gain. They write:

If we are interested in spreading work among more people and removing the United States from its current position as the international champion among wealthy countries in annual work time per worker, minor tinkering with current overtime laws will do little. We might borrow from some of the panoply of European mandates that alter the amount and timing of work hours. Among these are penalties for work on weekends, evenings, and nights and limits on annual overtime hours, while lengthening the accounting period for overtime beyond the current single week. If our goal is to spread work and make for a more relaxed society, these changes will help but their effects will also be small.

____________

The Roundup Case: Problems with Implementing Science-Based Policy

Imagine, just for the sake of argument, that you are open-minded about the question of whether the weed-killer Roundup (long produced by Monsanto, which was recently acquired by Bayer AG) causes cancer. You want to make a decision based on scientific evidence. However, you aren't a scientist yourself, and you don't feel competent at trying to read scientific studies.

Geoffrey Kabat asks "Who's Afraid of Roundup?" in the Fall 2019 issue of Issues in Science and Technology. More broadly, he uses the controversy over Roundup as a way to ask about the role of science in decision-making.

When it comes to Roundup and its active ingredient glyphosate, the Environmental Protection Agency has consistently said that "there are no risks to public health when glyphosate is used in accordance with its current label and that glyphosate is not a carcinogen." As Kabat points out:

The US Environmental Protection Agency’s recent assessment is only the latest in a succession of reports from national regulatory agencies, as well as international bodies, that support the safety of glyphosate. These include Health Canada, the European Food Safety Authority (EFSA), the European Chemicals Agency, Germany’s Federal Institute for Risk Assessment, and the Food and Agriculture Organization of the United Nations, as well as health and regulatory agencies of France, Australia, New Zealand, Japan, and Brazil.

But just when you find yourself deeply relieved that the experts have reached a consensus, you find that one agency disagrees. In 2015, the International Agency for Research on Cancer (IARC) listed glyphosate as a "probable carcinogen." There are lots of reasons to be dubious about the IARC decision, and to believe the consensus of all the other agencies around the world; Kabat runs through quite a list. Here are a few of his points:

  • Unlike the other health-and-safety agencies, the IARC ignores the size of the dose. Thus, for example, when IARC evaluated 500 agents and chemicals while ignoring the size of the dose, it found that 499 of them were possible carcinogens. Other agencies take the dose into account.
  • The IARC evaluation looked only at certain parts of some studies of how glyphosate affected rodents. Reanalysis of the same studies found that the "IARC Working Group that conducted the assessment selected a few positive results in one sex and used an inappropriate statistical test to declare some tumor increases significant."
  • There is a major study funded by the National Cancer Institute looking at 54,000 pesticide applicators in Iowa and North Carolina. "Indeed, when the results for glyphosate and cancer incidence … were finally published in the Journal of the National Cancer Institute, in 2018, the paper reported no significant increases …"
  • A key scientist in the IARC process both led the way in designating glyphosate as a substance to be studied and in writing the IARC report. Then, two weeks after the report came out, this scientist "signed a lucrative contract to act as a litigation consultant with a law firm—Lundy, Lundy, Soileau, and South—engaged in bringing lawsuits against Monsanto for Roundup exposure."

In a bigger-picture sense, the actual science over Roundup and glyphosate becomes almost irrelevant to the public disputes. The scientific question of whether glyphosate is a carcinogen is treated as identical to the question of whether one is anti-pesticide, anti-genetic-modification, and anti-Big Agriculture.

The result is what the head of the European Food Safety Authority called "the Facebook age of science." As background, the European agencies are well-known for their willingness to invoke the "precautionary principle"–basically, if we aren't sure and it might cause a problem, we should prohibit it. In this spirit, a group of almost 100 scientists wrote to EFSA to complain about its decision allowing glyphosate. Here's how Bernhard Url, the head of EFSA, responded:

You have a scientific assessment, you put it on Facebook, and you count how many people ‘like’ it. For [EFSA], this is no way forward. We produce a scientific opinion, we stand for it, but we cannot take into account whether it will be liked or not. … People that have not contributed to the work, that have not seen the evidence most likely, that have not had the time to go into the detail, that are not in the process, have signed a letter of support [for a ban on glyphosate]. Sorry to say that, for me, with this you leave the domain of science, you enter into the domain of lobbying and campaigning. And this is not the way EFSA goes.

Roundup is of course just one product, but the issue of how science will be used in public policy is much broader. For example, if a lawsuit alleges that Roundup causes cancer, the truth of that accusation presumably matters. As Kabat points out, it "should come as no surprise that the same factors that are at work here are at work in many other areas, whether electromagnetic fields, cell phone 'radiation,' so-called endocrine disrupting chemicals, numerous aspects of diet, cosmetic talc, GMOs, vaccines, nuclear power, or climate change."

In my own contentious way, I find it especially interesting when people make strong appeals to a scientific consensus in one area, but then dismiss it in other areas. For example, those who believe that action should be taken to reduce greenhouse gas emissions sometimes accuse their opponents of denying "the science." But on occasion, it then turns out that those who wrap themselves in the mantle of "the science" when it comes to climate change oppose vaccinations or Roundup. The proposal to build the Keystone XL oil pipeline across Canada and into the United States went through multiple environmental reviews during the Obama administration, each one finding that it would not have a negative effect. For those protesting the pipeline, as for those writing group letters to the European regulators about glyphosate, the "science" was only acceptable if it supported their prior beliefs.

One of my favorite examples about "the science" and popular beliefs involves the irradiation of food. For a quick overview, Tara McHugh describes "Realizing the Benefits of Food Irradiation" in the September 2019 issue of Food Technology Magazine. As she notes, the Food and Drug Administration recently approved irradiation for fresh fruits and vegetables, and it had already been approved for a range of other food products. McHugh writes:

The global food irradiation market was valued at $200 million in 2017 and was projected by Coherent Market Insights to grow at a 4.9% combined annual growth rate from 2018 to 2026. This projects the market size to rise to $284 million by 2026. This high growth rate was envisioned due to increased consumer acceptance since the U.S. Food and Drug Administration (FDA) approved phytosanitary treatment of fresh fruits and vegetables by irradiation. The food irradiation market in Asia is also growing very rapidly owing to approval of government agencies in India and other countries. Presently over 40 countries have approved applications to irradiate over 40 different foods. More than half a million tons of food is irradiated around the globe each year. About a third of the spices and seasonings used in the United States are irradiated.

It would be interesting to see a Venn diagram showing how many of those who believe in "the science" when it comes to climate change also believe in "the science" when it comes to the safety of Roundup, vaccinations, or irradiating food. Or perhaps there is a human cognitive bias that makes us more prone to believe "the science" when it warns of danger, but less likely to believe it when it tells us that something we believe to be dangerous (or that we oppose on other grounds) is actually safe.

Flexible vs. Deep: What are the Ties that Bind a Firm Together?

One of the classic questions in economics is about what determines what is inside or outside a company: that is, why do companies buy some inputs from outside the firm through market transactions, but hire workers to produce other inputs inside the firm? Economists will recognize this as the central question posed by Ronald Coase (Nobel '91) in his famous 1937 essay, "The Nature of the Firm" (Economica, November 1937, pp. 386-405). Coase points out that economic activity within firms is coordinated by conscious administrative action, while economic activity between firms is coordinated by supply and demand. In one passage that always makes me smile, Coase writes (footnotes omitted):

As D. H. Robertson points out, we find "islands of conscious power in this ocean of unconscious co-operation like lumps of butter coagulating in a pail of buttermilk." But in view of the fact that it is usually argued that co-ordination will be done by the price mechanism, why is such organisation necessary? Why are there these "islands of conscious power"? Outside the firm, price movements direct production, which is co-ordinated through a series of exchange transactions on the market. Within a firm, these market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur-co-ordinator, who directs production. It is clear that these are alternative methods of co-ordinating production.

The line between which activities are coordinated more effectively by administrative action inside a firm and which are coordinated more effectively by market transactions between firms shifts over time, and across different types of firms. One current example is the number of firms that sell manufactured goods but are "factoryless"–that is, they don't own or manage the factory in which their goods are produced. Diane Coyle and David Nguyen offer some recent examples in "No plant, no problem? Factoryless manufacturing and economic measurement" (ESCoE Discussion Paper 2019-15, September 2019). In a short overview of that paper, they write:

Did you know that Mercedes does not actually produce its heavy-duty G-Class? To be fair, it does keep design, development and marketing of the SUV in-house, but the vehicle is entirely built in the factory of Magna Steyr, a contract manufacturer based in Graz, Austria. In the same plant one will also find entire production lines for the Jaguar I-Pace and E-Pace, as well as BMW’s Series 5.

Coyle and Nguyen are focused on the question of how to measure the "output" of an economy if design, development, and marketing are in one place, but the physical production intimately linked to that design, development, and marketing is in another. For a previous discussion of factoryless manufacturing in the US economy, see "Factoryless Goods Producing Firms" (May 16, 2015).

In thinking about these shifting lines between what is produced inside and outside a company, I was intrigued by Edward Tenner's discussion of "a long-developing tension between two iconic corporate models: the flexible organization and the deep organization" in "The 737 MAX and the Perils of the Flexible Corporation" (Milken Institute Review, Fourth Quarter 2019, pp. 36-49). Tenner describes the difference in this way:

Depth is not just a matter of corporate size or scale. It is an attitude of public responsibility. Executives of a deep organization may strive for the highest possible profits — but only in the context of a perceived essential role in the social order. The gospel of flexibility, rooted in business-school doctrines of the primacy of shareholder value … seeks to preserve freedom of short-term optimization and cost reduction. By contrast, the gospel of depth has been mainly a tacit one, based on the idea that a dominant organization has a distinct role in the social order. It seeks to serve multiple stakeholders, to provide safety and security to consumers even if it raises costs and to plan for its long-term future. Deep organizations have often subscribed to what has been called welfare capitalism, providing impressive health, educational and recreation services for employees, expecting exceptional loyalty and higher productivity in return. Many, though not all, deep organizations have government ties and semi-official roles. John D. Rockefeller’s Standard Oil was not a deep organization in this sense; AT&T before the breakup of Ma Bell was.

A deep organization has extraordinary in-house capabilities, managed administratively inside the firm. A flexible organization is more likely to rely on a mixture of shifting locations and outside contractors where this seems to increase efficiency and raise profits.

Tenner uses Boeing as an example of a shift from a "deep" to a "flexible" organization. As an illustration from the days of a "deep" Boeing, he tells the story of the launch of the 747 back in 1969:

In 1968, construction of the first 747 from scratch, a plane that was radically different than any existing aircraft, was completed in only 29 months. Boeing had to build an entirely new factory (the world’s largest) to produce it. But the company already had the asset on hand that mattered most, a staff of some 50,000 experienced engineers, technicians and managers known within the company as “the Incredibles.” In contrast to the trial and error of many engineering projects, the 747 was completed with such remarkable precision that the head of the project, Joe Sutter, could predict exactly where on the runway the plane would take off — and test pilots lauded its handling from Day 1. 

A deep organization, like Boeing in the 1960s and 1970s, has not only a reservoir of professional skills, but also an esprit de corps that can resist policies that threaten the mission. At one point in the project, Boeing senior management wanted to cut 1,000 of the 4,500 engineers assigned to the 747. At the risk of his own job, Sutter walked out of the meeting at which the proposal was made; he prevailed.

But over time, Boeing shifted toward flexibility. It moved its headquarters from Seattle to Chicago. It opened a major production facility in South Carolina. There were strong arguments for these and other changes, but there were also tradeoffs in terms of in-house expertise and cohesiveness. Then came the design and production of the 737 MAX, two of which crashed earlier this year. There are typically multiple reasons for plane crashes, and this case is no exception. But some of the reasons proposed in media reports include problems with manufacturing in the South Carolina facility, "including manufacturing debris left in finished aircraft"; reports that "some key software of the Boeing 737 MAX had been developed by $9-an-hour programmers outsourced abroad"; and "the FAA's delegation of some essential monitoring tasks to Boeing employees."

As Tenner argues, we are in an age of the flexibility gospel, and the advantages of flexibility in many contexts are very real. However, Tenner pushes back, gently, pointing out that deep organizations have their benefits, too.

For example, deep organizations often had in-house corporate research laboratories, which often proved remarkably fertile places for the interaction of cutting-edge science and real-world production issues.  And sometimes those corporate research labs produced extraordinarily important spinoff innovations like the transistor or the laser, or even the idea of the relational database.

Deep organizations sometimes provided quality so high that it was redundant. In the case of AT&T during its days as a deep organization, before it was broken up in 1984, Tenner notes:

And, in the 1950s, handsets designed by the renowned Henry Dreyfuss and manufactured by its own Western Electric subsidiary, were rated to stand decades of punishing use. One result was the Bell System’s astonishing reliability rate of 99.999 percent call completion. In 2011, a senior Google executive acknowledged that the web had yet to achieve reliability even remotely as high.

Products that seem overengineered can look like a place for cost-cutting, but the lessons learned from producing in this way can be valuable–and when it comes to airplane manufacturing, overengineering and redundancy in the name of safety seem important.
There is also a certain kind of deep expertise that can be developed by in-house professionals. Tenner writes:

In 1989, the information scientist Michael J. Prietula and the Nobel economist Herbert A. Simon published an article in the Harvard Business Review, “The Experts in Your Midst,” on the wealth of knowledge and capabilities that underappreciated specialists in an organization possess. Theoretically, a lean, agile company might try to substitute outside consultants and temporary workers for in-house talent. Yet because in-house professionals bring years of tacit knowledge to problems, they paradoxically may be better equipped to find solutions than experts unfamiliar with the organization’s workings. The sociologist Chandra Mukerji has even argued that the government sponsors academic oceanography generously not because its findings are directly applicable to, say, Navy operations, but because its support creates a reserve army of scientific experts for urgent needs.

There's no single "right" answer to whether a company should be flexible or deep. Factoryless production may work just fine for some firms. Deep in-house expertise may be important to others. Tenner cites Google as a leading example of a modern "deep" firm. He writes:

From the perspective of 2020, the state of deep organization in American industry is not as discouraging as it appeared a generation earlier. Google, still an obscure academic startup in the late 1990s, is the old AT&T’s successor as a deep organization, hegemonic in the new field of online search as the Bell System had been in telecommunications. In 2017, fully 16 percent of Google employees held PhD degrees, over three times the proportion at Microsoft, Apple and Amazon. If a measure of a deep organization is its efforts to plan the future privately, it is hard to think of any 20th-century corporation’s plans as ambitious (and controversial) as those of a Google subsidiary for creating a network-controlled smart city in Toronto.

Although it's not one of Tenner's themes, I would also add that the choice between flexible and deep shapes a firm's incentives to invest in the human capital of its workers. When a deep firm expects many of its workers to stay for a substantial period of time, it has an incentive to invest in their skills and training, to view them as candidates for future promotions, to encourage them to build up specific knowledge about the company, and in general to envision the company as made up of people who are building longer-term careers there. When a flexible firm buys from outside, it doesn't need to care much about those who work for its suppliers. The notion of firm and employee making investments in a long-term career is diminished. Workers instead need to think semi-continually about where their next job will be, and firms need to think semi-continually about hiring from outside to replace departing workers. The notion that many workers will eventually settle into a career progression of rising skills and pay with a single employer comes to seem outdated. That, too, is a tradeoff of the flexible firm.

Some Thoughts about Populism

"Populism" is remarkably slippery to define, but many people claim to know it when they see it–and to worry about its resurgence. Here, I'll offer some thoughts about the current populist moment. I've spent some time thinking about this lately because the Journal of Economic Perspectives, where I work as Managing Editor, published a four-paper "Symposium on Modern Populism" in the Fall 2019 issue. The papers are:

Also, the Centre for Economic Policy Research in London has started a Research and Policy Network on Populism, and recently published four short and readable essays on the topic at its VoxEU website:

Defining Populism? 

Definitions of "populism" are sometimes very broad. For example, the Merriam-Webster definition is that it refers to a party "claiming to represent ordinary people." This has an element of truth, but it's difficult to think of a successful political party that would not make such a claim! Other definitions focus on a party that wants to redistribute to the poor. Again, this has an element of truth, but it seems overly broad.

The politicians who are commonly referred to as populists do claim to represent the ordinary people and the poor, but they also have an us-vs.-them edge. It's not just wanting to help the poor, but also a broader narrative that ordinary people are being ill-treated by identifiable villains. Sometimes the designated villains are economic, like big corporations and the rich. Sometimes the designated villains have a foreign tinge, like those who allow imports of foreign products or a surge of immigrants. Populism typically involves both an economic claim that ordinary people are being left behind or mistreated, and a broader political/cultural claim that elites don't understand ordinary people and are taking advantage of them.

Part of what makes "populism" hard to define is that it is often used as a criticism. The implication is that populists aren't just people who would advocate higher taxes on the rich or on corporations, but people who would demonize those groups or practice confiscatory policies toward them. Populists wouldn't just argue for lower imports or enhanced border controls, but would describe these changes in terms of gross unfairness, plotted by the few against the many.

In short, the concern is that populists aren't technical analysts, arguing over the costs and benefits of shifting some policy parameters to help ordinary people or the poor. Instead, populists are whipping up surges of emotion to gain political power, while making political and social divisions worse and making policy promises that either can't be kept or won't work. Once populist emotions are fully roused, the polity may become disdainful of boring and seemingly irrelevant ideas like constraints on executive power, or an independent judiciary, media, and central bank. As the populist policy prescriptions inevitably fail, the failure may just feed the fire of populism further–by supposedly showing that the enemies of the ordinary people are even stronger than suspected, and must be countered with even more heavy-handed interventions by a charismatic and authoritarian leader.

Economic Manifestations of Populism

For economists, perhaps the classic view of "populism" is based on a common pattern in Latin America, described in work by Rudiger Dornbusch and Sebastian Edwards back in the 1990s. They argued that populist regimes used a variety of policies–protectionism, agrarian reforms, controls and regulations, and the nationalization of large companies–but that perhaps the defining theme was a strong rise in government spending.

At first, this rise in spending often stimulated the economy, and in some cases a populist leader also had good economic luck–like an oil-exporting economy experiencing a rise in the price of oil. At this stage, there was often a lot of preening about how the populist prescription had worked. There were often price controls to assure that everything remained affordable, and inflows of imported products as well. But as government budget deficits climb, as the inflow of imports increases trade deficits, and as the price controls and regulations and nationalizations start to choke off economic flexibility, problems arise: shortages of goods and black markets, wages not keeping up with soaring inflation, macroeconomic problems in repaying debt. Of course, a populist leader can use all these problems to claim that even more extreme policies are needed, but economics is not a subject that greatly respects one's wishes. Eventually there is a crash and a clean-up.

Venezuela, or perhaps Greece, offers some recent examples of this kind of populism. But when talking about the economic roots of modern populism, most commentators have in mind something less extreme. They are talking about communities that suffered economically during the Great Recession, or that suffered from the China import shock of the early 2000s, or that feel their jobs and local cultural patterns are threatened by a surge of immigrants. As a result, the argument goes, residents of these communities are motivated to vote not for leaders like the classic Latin American populists Juan Peron in Argentina or Hugo Chavez in Venezuela, but for politicians and causes that seem to have a populist tinge: for Brexit in the UK, the Alternative für Deutschland in Germany, the Sweden Democrats in Sweden, or Donald Trump in the United States. The JEP essay by Italo Colantone and Piero Stanig goes through the connections from economic disruptions to support for populist parties in the context of European countries.

Cultural Manifestations of Populism

The economic roots of populism clearly have some explanatory power, but one can raise legitimate questions about whether they are the core driving force of modern populism.

For example, Yotam Margalit in his JEP essay points out that a number of studies focus on, say, how many votes for Brexit or President Trump can be traced to communities that were most severely hit by the China import shock or the Great Recession. One can often make a case that these economic events shifted a few percentage points of the vote, and so, in a close election, they may have tipped the balance. But as Margalit notes, saying that a negative economic event changed voting patterns by a few percentage points doesn't explain the entire rest of the vote. When explaining support for populism, it seems important to consider not just the economic factors that affected 2-3% of the vote and tipped the balance in an election, but also the other 48-49% of the vote that did not depend on those economic factors. From this viewpoint, economic factors matter, but they are far from the entire story.

Concern over immigration is clearly a major issue uniting many modern populist parties. But it's not obvious that the economic consequences of immigration are the real issue here. It turns out that anti-immigration sentiment is often stronger in areas that have experienced negative economic shocks–whether or not those areas have actually experienced more immigration. It also turns out that in public opinion surveys about immigration, anti-immigration sentiment is often much stronger if the questions specifically ask about immigrants who don't speak the language of the new country or who come from countries with different cultural or religious contexts. Margalit describes an alternative to the view that emphasizes economic causes as the roots of populism:

On this view, long-term structural social developments—increased access to higher education, growing ethnic diversity, urbanization, more equal gender roles—have led to greater acceptance of diverse lifestyles, religions, and cultures. These changes, and the perceived displacement of traditional social values, have caused a sense of resentment among segments of the population in the West, particularly among white men, older people, conservatives, and those with less formal qualifications. Increased exposure to foreign influences that comes with globalization, and even more so the effects of waves of immigration, has exacerbated the sense of a cultural and demographic threat. As a result, formerly predominant majorities have felt their social standing erode and have become increasingly receptive to populist charges against a disconnected, cosmopolitan elite that has turned its back on them. They have also bought into the populist nostalgia for a “golden age” of cultural homogeneity, traditional values, and a strong national identity. Hard economic times undermine the perceived competence of the economic and political elites and thus help fuel the populist distrust in them. Yet by this account, adverse economic change is a contributing factor and possibly a trigger. However, it is not the root cause of widespread populist support.

Is Modern Populism Left or Right?  

In a US context, my sense is that populism has some appeal on all sides of the political spectrum, albeit in different ways.

For example, President Trump sounds populist notes with his "Drain the Swamp," "Fake News," and "Build the Wall" rhetoric. He seeks to build an image of himself as working on behalf of the ordinary people, who have not had their interests protected by international trade agreements and a lack of border enforcement. Populist leaders often seek to borrow and spend to stimulate the economy, and believe that the central bank should subordinate itself to this agenda. Trump follows these patterns. Populists often try to expand executive power, and take whatever actions they can by fiat, while lashing out at other institutions like Congress and the media. Trump has a PhD in lashing out.

But it seems clear that many Democratic politicians are offering appeals with a populist tone, too. For example, the rhetoric from Senators Warren and Sanders about protecting the ordinary people from the predations of capitalism, corporate management, Wall Street, and the global economy often has a Trumpian tone. If populism is defined in part by politicians who promise dramatically higher spending, Warren and Sanders fit the bill. If populism is about expanding executive power, President Obama was the one who memorably stated "I've got a pen, and I've got a phone," meaning that he would advance his policy agenda without working through Congress or the court system. Just as Trump has used executive authority to undo a number of the Obama administration pen-and-phone directives, Warren, Sanders, Biden, and others are promising to use executive power to reinstate them and to add others of their own. When Democrats suggest packing the Supreme Court, ending the electoral college, and monitoring or limiting what political commentary will be allowed online or in advertisements, they are pushing back on some of the established institutional constraints on power.

Of course, none of this is to say that leading Democratic candidates are "just like" President Trump. (No prominent American politician in my lifetime, of any party, is "just like" Trump.) For example, it seems plausible to argue that the Democrats are more likely to enunciate their views in the language of economic populism, while Republicans are more likely to enunciate their views in the language of cultural populism. But there is some overlap here, and my point is that some deeper elements of the populist stance attract public support across party lines.

Is Modern Populism a Sign of a New Political Divide? 

For much of my lifetime, it has been common to characterize the main US political divide as labor vs. capital, or workers vs. corporations. But the divisions of modern populism don't quite work in this way. Colantone and Stanig describe the shift as follows:

[T]he recent political shifts may reflect a structural realignment of social groups and parties along new political dividing lines, which might be here to stay. In the half century after World War II, the politics of advanced western European democracies were structured to a large extent by a conflict between labor and owners of capital, and took the form of choices between more reliance on markets and deeper state intervention in the context of European economic and political integration. In the coming years, political conflict might capture a fundamental contraposition between winners and losers of structural changes in the economy, and may be centered mainly on a cosmopolitan versus nationalist conflict. The result could well be a credible restructuring of current traditional parties or the emergence of new parties that might assemble social constituencies in favor of inclusive globalization and technological progress. As such changes occur, the representation of vulnerable segments of society is not bound to be a prerogative of economic nationalist and radical-right forces. The challenge for believers in liberal policies is how to popularize a version of embedded liberalism that will be responsive to the current challenges of slow growth and structural economic shifts. 

Other writers have touched on a similar theme, focusing on how modern societies are being sorted into two groups: people in urban areas who live in a multicultural and international day-to-day world, often with socially liberal values, and who are reaping many of the benefits of economic growth; and people in smaller cities and rural areas who see their local economy as stagnating and their place in society as diminishing.

The fundamental challenge is that we are living in a time of powerful underlying changes: in technology, communication, globalization, corporate structure, jobs, geographical sorting, marital sorting, an aging population, environmental dangers, and others. The stresses created by these changes are real, and addressing them is hard. But the populist impulse, whether it arrives from the right or the left, is rooted in would-be authoritarian executives who stoke us-vs.-them social divisions while trumpeting their unrealistic or harmful policies as an easy answer. 

Interview with Maureen Cropper: Environmental Economics

Catherine L. Kling and Fran Sussman have "A Conversation with Maureen Cropper" in the Annual Review of Resource Economics (October 2019, 11, pp. 1-18). As they write in the introduction: "Maureen has made important contributions to several areas of environmental economics, including nonmarket valuation and the evaluation of environmental programs. She has also conducted pioneering studies on household transportation use and associated externalities." There is also a short (~a dozen paragraphs) overview of some of Cropper's best-known work.

I had not known that Cropper identified as a monetary economist when she was headed for graduate school. Here is her description of her early path to environmental economics:

My first formal introduction to economics was in college. I entered Bryn Mawr College in 1966. I had great professors at Bryn Mawr: Philip W. Bell, Morton Baratz, and Richard DuBoff. I learned microeconomics by reading James Meade's A Geometry of International Trade—that's how we were taught microeconomics by Philip Bell. It was really a very good grounding in economics. I got married as I graduated from college to Stephen Cropper (hence my last name), and I went to Cornell University because Stephen was admitted to the Cornell Law School. I was admitted to the Department of Economics at Cornell.

Frankly, my interests at the time were really in monetary economics, so I took several courses at the Cornell Business School, including courses in portfolio theory. My dissertation was on bank portfolio selection with stochastic deposit flows. My dissertation advisor was S.C. Tsiang. Henry Wan and T.C. Liu were also on my committee. Henry was a fantastic mentor and advisor. I would write a chapter of my dissertation and put it in his mailbox; the next day he would have it covered with comments. He was just an amazing advisor and very, very engaged. At this time, I was not doing anything in environmental economics. In fact, my first job offer was from the NYU Business School.

The reason I went into environmental economics is that I met Russ Porter in graduate school. Russ later became the father of my children. We decided that we would go on the job market together and looked for a place that would hire two economists. We wound up at the University of California, Riverside, which at the time was the birthplace of the Journal of Environmental Economics and Management (JEEM). I was on the job market in 1973, just when this journal was launched. Ralph d'Arge was the chair of the department then. Tom Crocker also taught there, and Bill Schulze and Jim Wilen were students in the department.

It was going to UC Riverside that really caused me to switch fields and go into environmental economics. It was a very important decision, although I must say it was made partly for personal reasons. It's had a huge impact on my life.

In the interview and overview, it quickly becomes apparent that Cropper has worked on an unsummarizably wide array of topics. Examples include stated preference studies to estimate the value of a statistical life, which became the basis for estimates used by the OECD and in Canada. Another study became the basis for EPA regulations that estimated the value of avoiding a case of chronic bronchitis through air pollution regulations. Cropper worked on whether or not to ban certain pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act, what methods to use in cleaning up Superfund sites, and whether to ban certain uses of asbestos under the Toxic Substances Control Act (TSCA). She worked on how to use tradable allowances to reduce sulfur dioxide (SO2) emissions under the Acid Rain Program.

Cropper worked on many issues involving air pollution in India. She worked on models estimating household location choices in Baltimore and in Mumbai. The studies in Mumbai became the basis for looking at other policies: slum relocation or converting buses to compressed natural gas. She has estimated how the shapes of cities affect demand for travel, and studied cross-country data on the relationship between growth and traffic fatalities. The interview touches on these topics and more. Here's a description of one such study from Cropper:

When I first got to the World Bank I realized that, in India, there hadn't been any state-of-the-art studies on the impact of air pollution on mortality. This was around 1995, the time when important cohort studies by Arden Pope and Douglas Dockery were coming out in the United States.

There is also literature looking at the impact of acute exposures to air pollution—daily time-series studies of the impact of air pollution on mortality. With the support of the Bank, I was able to get information in Delhi from air quality monitors—four years of daily data, although monitoring was not done every day. I was also able to obtain data on deaths by cause and age. I worked with Nathalie Simon and Anna Alberini to carry out a daily time-series study of the impact of air pollution on mortality. …

We had a hard time getting the study published in an epidemiological journal because economists write up their results differently than epidemiologists. But we did document significant effects of particulate matter on mortality. And, it was important to do something early on and convince people in India that this sort of work could be done. (There have been many subsequent studies.) It is also interesting that the results we obtained in Delhi were similar to results obtained in other time-series studies in the United States.

When those at the EPA or the World Bank or the National Academy of Sciences were setting up an advisory committee or a consensus panel to produce a report or an evaluation, Cropper's name was perpetually on the short list. Her memory of one such experience gives a sense of why she has been in such high demand:

I learned so much in my time serving on the EPA Science Advisory Board. I actually began there in the 1990s when the retrospective analysis of the Clean Air Act—the first Section 812 study—was being written. Dick Schmalensee was the head of that committee. I actually chaired the review of the first prospective 812 study of the benefits and costs of the 1990 Clean Air Act Amendments.

I also preceded you, Cathy, as the head of the Environmental Economics Advisory Committee at EPA. I learned a lot being on these EPA committees. In terms of the 812 studies, you've got a subcommittee that's dealing with the health impacts: epidemiologists and toxicologists. You have air quality modelers and people who are exposure measurement experts. And of course, you also have economists. It's a fantastic opportunity to be exposed to all parts of the analysis. If you are concerned about air pollution policy, which is what I've worked on the most, you need to get the perspective of all of these different disciplines.

I was also interested in Cropper's comments on how, in the area of environmental economics, theoretical research has diminished and empirical work has become more prominent. My sense is that this is broadly true for many fields of economics. Cropper says:

I think quasi-experimental econometrics are one of the things that graduate students really do learn nowadays. Graduate students are also learning structural approaches. If you want to estimate the welfare impacts of corporate average fuel economy (CAFE) standards on the new car market, you've got to use a structural model. You also have students who study empirical industrial organization, bringing those techniques to bear in environmental economics. In terms of the percentage of work that is done today that is more theoretically based, my impression is that theoretical research really has declined, in terms of the number of purely theoretical papers written or even papers that are using theoretical approaches.

The emphasis on theory has also changed during the time I have been teaching. When I was teaching a graduate class a few years ago, we were talking about discounting issues and, of course, the Ramsey formula. Students had heard of the Ramsey formula, but when I asked students if they knew who Frank Ramsey was, I was surprised to find that they didn't know. The fact is, I think there has been this shift. When I teach environmental economics, the preparation of students in terms of econometric techniques is really quite impressive. I've got to say that has really been ramped up. That represents an important change in the profession …

Fall 2019 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided, to my delight, that the journal would be freely available on-line, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I'll start with the Table of Contents for the just-released Fall 2019 issue, which in the Taylor household is known as issue #130. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

_________________

Symposium on Fiftieth Anniversary of the Clean Air and Water Acts

"What Do Economists Have to Say about the Clean Air Act 50 Years after the Establishment of the Environmental Protection Agency?" by Janet Currie and Reed Walker

Air quality in the United States has improved dramatically over the past 50 years in large part due to the introduction of the Clean Air Act and the creation of the Environmental Protection Agency to enforce it. This article is a reflection on the 50-year anniversary of the formation of the Environmental Protection Agency, describing what economic research says about the ways in which the Clean Air Act has shaped our society—in terms of costs, benefits, and important distributional concerns. We conclude with a discussion of how recent changes to both policy and technology present new opportunities for researchers in this area.
Full-Text Access | Supplementary Materials

"Policy Evolution under the Clean Air Act," by Richard Schmalensee and Robert N. Stavins

The US Clean Air Act, passed in 1970 with strong bipartisan support, was the first environmental law to give the federal government a serious regulatory role, established the architecture of the US air pollution control system, and became a model for subsequent environmental laws in the United States and globally. We outline the act's key provisions, as well as the main changes Congress has made to it over time. We assess the evolution of air pollution control policy under the Clean Air Act, with particular attention to the types of policy instruments used. We provide a generic assessment of the major types of policy instruments, and we trace and assess the historical evolution of the Environmental Protection Agency's policy instrument use, with particular focus on the increased use of market-based policy instruments, beginning in the 1970s and culminating in the 1990s. Over the past 50 years, air pollution regulation has gradually become more complex, and over the past 20 years, policy debates have become increasingly partisan and polarized, to the point that it has become impossible to amend the act or pass other legislation to address the new threat of climate change.
Full-Text Access | Supplementary Materials

"US Water Pollution Regulation over the Past Half Century: Burning Waters to Crystal Springs?" by David A. Keiser and Joseph S. Shapiro

In the half century since the founding of the US Environmental Protection Agency, public and private US sources have spent nearly $5 trillion ($2017) to provide clean rivers, lakes, and drinking water (annual spending of 0.8 percent of US GDP in most years). Yet over half of rivers and substantial shares of drinking water systems violate standards, and polls for decades have listed water pollution as Americans' number one environmental concern. We assess the history, effectiveness, and efficiency of the Clean Water Act and Safe Drinking Water Act and obtain four main conclusions. First, water pollution has fallen since these laws were passed, in part due to their interventions. Second, investments made under these laws could be more cost effective. Third, most recent studies estimate benefits of cleaning up pollution in rivers and lakes that are less than the costs, though these studies may undercount several potentially important types of benefits. Analysis finds more positive net benefits of drinking water quality investments. Fourth, economic research and teaching on water pollution are relatively uncommon, as measured by samples of publications, conference presentations, and textbooks.
Full-Text Access | Supplementary Materials

Symposium on Modern Populism
"On Latin American Populism, and Its Echoes around the World," by Sebastian Edwards

In this article, I discuss the ways in which populist experiments have evolved historically. Populists are charismatic leaders who use a fiery rhetoric to pitch the interests of "the people" against those of banks, large firms, multinational companies, the International Monetary Fund, and immigrants. Populists implement redistributive policies that violate the basic laws of economics, and in particular budget constraints. Most populist experiments go through five distinct phases that span from euphoria to collapse. Historically, the vast majority of populist episodes end up badly; incomes of the poor and middle class tend to be lower than when the experiment was launched. I argue that many of the characteristics of traditional Latin American populism are present in more recent manifestations from around the globe.
Full-Text Access | Supplementary Materials

"Informational Autocrats," by Sergei Guriev and Daniel Treisman

In recent decades, dictatorships based on mass repression have largely given way to a new model based on the manipulation of information. Instead of terrorizing citizens into submission, "informational autocrats" artificially boost their popularity by convincing the public they are competent. To do so, they use propaganda and silence informed members of the elite by co-optation or censorship. Using several sources, including a newly created dataset on authoritarian control techniques, we document a range of trends in recent autocracies consistent with this new model: a decline in violence, efforts to conceal state repression, rejection of official ideologies, imitation of democracy, a perceptions gap between the masses and the elite, and the adoption by leaders of a rhetoric of performance rather than one aimed at inspiring fear.
Full-Text Access | Supplementary Materials

"The Surge of Economic Nationalism in Western Europe," by Italo Colantone and Piero Stanig

We document the surge of economic nationalist and radical-right parties in western Europe between the early 1990s and 2016. We discuss how economic shocks contribute to explaining this political shift, looking in turn at theory and evidence on the political effects of globalization, technological change, the financial and sovereign debt crises of 2008–2009 and 2011–2013, and immigration. The main message that emerges is that failures in addressing the distributional consequences of economic shocks are a key factor behind the success of nationalist and radical-right parties. We discuss how the economic explanations compete with and complement the "cultural backlash" view. We reflect on possible future political developments, which depend on the evolving intensities of economic shocks, on the strength and persistence of adjustment costs, and on changes on the supply side of politics.
Full-Text Access | Supplementary Materials

"Economic Insecurity and the Causes of Populism, Reconsidered," by Yotam Margalit

Growing conventional wisdom holds that a chief driver of the populist vote is economic insecurity. I contend that this view overstates the role of economic insecurity as an explanation in several ways. First, it conflates the significance of economic insecurity in influencing the election outcome on the margin with its significance in explaining the overall populist vote. Empirical findings indicate that the share of populist support explained by economic insecurity is modest. Second, recent evidence indicates that voters' concern with immigration—a key issue for many populist parties—is only marginally shaped by its real or perceived repercussions on their economic standing. Third, economics-centric accounts of populism treat voters' cultural concerns as largely a by-product of experiencing adverse economic change. This approach underplays the reverse process, whereby disaffection from social and cultural change drives both economic discontent and support for populism.
Full-Text Access | Supplementary Materials

Articles

"What They Were Thinking Then: The Consequences for Macroeconomics during the Past 60 Years," by George A. Akerlof

This article explores the development of Keynesian macroeconomics in its early years, and especially in the Big Bang period immediately after the publication of The General Theory. In this period, as standard macroeconomics evolved into the "Keynesian-neoclassical synthesis," its promoters discarded many of the insights of The General Theory. The paradigm that was adopted had some advantages. But its simplifications have had serious consequences—including immense regulatory inertia in response to massive changes in the financial system and unnecessarily narrow application of accelerationist considerations (regarding inflation expectations).
Full-Text Access | Supplementary Materials

"The Impact of the 2018 Tariffs on Prices and Welfare," by Mary Amiti, Stephen J. Redding and David E. Weinstein

We examine conventional approaches to evaluating the economic impact of protectionist trade policies. We illustrate these conventional approaches by applying them to the tariffs introduced by the Trump administration during 2018. In the wake of this increase in trade protection, the United States experienced substantial increases in the prices of intermediates and final goods, dramatic changes to its supply-chain network, reductions in availability of imported varieties, and the complete pass-through of the tariffs into domestic prices of imported goods. Therefore, the full incidence of the tariffs has fallen on domestic consumers and importers so far, and our estimates imply a reduction in aggregate US real income of $1.4 billion per month by the end of 2018. We see similar patterns for foreign countries that have retaliated with their own tariffs against the United States, which suggests that the trade war has also reduced the real income of these other countries.
Full-Text Access | Supplementary Materials

"Retrospectives: Tragedy of the Commons after 50 Years," by Brett M. Frischmann, Alain Marciano and Giovanni Battista Ramello

Garrett Hardin's "The Tragedy of the Commons" (1968) has been incredibly influential generally and within economics, and it remains important despite some historical and conceptual flaws. Hardin focused on the stress population growth inevitably placed on environmental resources. Unconstrained consumption of a shared resource—a pasture, a highway, a server—by individuals acting in rational pursuit of their self-interest can lead to congestion and, worse, rapid depreciation, depletion, and even destruction of the resources. Our societies face similar problems, with respect to not only environmental resources but also infrastructures, knowledge, and many other shared resources. In this article, we examine how the tragedy of the commons has fared within the economics literature and its relevance for economic and public policies today. We revisit the original piece to explain Hardin's purpose and conceptual approach. We expose two conceptual mistakes he made: conflating resource with governance and conflating open access with commons. This critical discussion leads us to the work of Elinor Ostrom, the recent Nobel Prize in Economics laureate, who spent her life working on commons. Finally, we discuss a few modern examples of commons governance of shared resources.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor