China's Belt and Road Initiative: Could It All Come Crashing Down?

Brad Parks at the Center for Global Development managed to find a nice way of boiling down the ways that China's Belt and Road Initiative could possibly become a failure and a burden in a short thought experiment ("Chinese Leadership and the Future of BRI: What Key Decisions Lie Ahead?" July 24, 2019). Here's how he describes one possible future a decade from now (I added a couple of paragraph breaks):

It’s 2028. The Belt and Road Initiative (BRI) has been underway for 15 years, but the initial enthusiasm and momentum behind BRI has vanished. Many of the governments that initially joined the initiative have publicly withdrawn or quietly wound down their participation. China’s staunchest allies remain engaged but even they have reservations about the wisdom of the initiative. They are saddled with unproductive public investment projects and struggling to service their debts. Domestic public sentiment towards China has soured, and they have come to view their participation in BRI as more of a political liability than an asset. But they worry about the consequences of alienating their most important patron and creditor. China has also assumed a defensive posture. Lacking the goodwill that it possessed at the beginning of BRI, it is now using inducements and threats to prevent its remaining clients from abandoning the initiative. 

Western donors and lenders watch from the sidelines with a sense of bemusement. They encouraged China to “multilateralize” BRI by establishing a common set of project appraisal standards, procurement guidelines, fiduciary controls, and social and environmental safeguards that other aid agencies and development banks could support. But Beijing chose to go it alone. It opted not to embrace the use of economic rate-of-return analysis to vet project proposals; resisted efforts to harmonize its environmental, social, and fiduciary safeguards with those used by aid agencies and development banks outside of China; and pushed back on the “Western” suggestion that it modernize its monitoring and evaluation practices. China bet that its fast and flexible approach to infrastructure finance would prove to be so compelling that traditional donors and lenders would eventually jump on the bandwagon and co-finance BRI projects. But it miscalculated. Its model was insufficiently attractive on its merits to enlist the participation and support of the other major players in the bilateral and multilateral development finance market. Nor was it sufficiently appealing to sustain elite and public support in partner countries.

China is now substantially weaker at home than it was during the early days of BRI. Its policy banks and state-owned commercial banks faced top-down and bottom-up pressure—from the State Council and cash-poor BRI participants—to fast-track big-ticket infrastructure projects with as little “red tape” as possible. But many of these projects were poorly designed, so the banks are now dealing with an unsustainable stock of non-performing loans, leaving them with two equally unattractive choices: restructure the debts of the least creditworthy countries or throw good money after bad by offering new loans to help borrowers repay old loans. 

With lower-than-expected reflows entering the coffers of China Development Bank, China Eximbank, and the “big 4” state-owned commercial banks, the People’s Bank of China sees the writing on the wall and decides to intervene. It recapitalizes the banks once, and then again, and again. As the country’s foreign exchange reserves dwindle, fiscally conservative voices within China grow louder. They call upon the banks to rein in their overseas lending activities and find common cause with the Chinese public, which sees ample evidence of waste and corruption in BRI projects and has a declining appetite for overseas entanglements. As populist and isolationist pressures mount, China’s political leadership concludes that it no longer enjoys domestic support for its international leadership ambitions. 

Just to be clear, Parks isn't predicting that this scenario will definitely happen, only that it could happen. In my own mind, the central issue here is about the nature of debt. When debt is used to finance a successful project or investment, then the loan can be repaid on time and all parties are pleased. But if debt is used to finance a project that fails in narrow economic terms, and is also accompanied by a cavalcade of concerns over waste, political corruption, environmental dangers, and social harms, then the loan isn't going to be repaid on time (or perhaps at all) and all parties will be unhappy. When one country is both lending the money and often taking the lead in planning, building, and managing how the loaned funds are spent, while another country is repaying a loan that brought unwanted side-effects rather than promised benefits, the possibility for conflict and lasting bad feeling is very high.

With any project as enormous as the Belt and Road Initiative, it's safe to predict that there will be both successes and failures. But in many cases, other international lenders had already considered or were in the process of considering the infrastructure projects that China is supporting. The other international lenders had concerns about economic viability, or wanted to put in place certain conditions to reduce risks of corruption, environmental damage, or other unwanted side effects. China's lenders were willing to brush aside these concerns. The future of the Belt and Road Initiative may thus rest on whether other international lenders have been too timid about lending to projects that were good risks, or whether China's lenders have been too aggressive lending to projects that were poor risks.

I've tried to capture some of this dynamic in previous posts on the subject (although not as crisply as in Parks's thought experiment):

For additional follow-up, I'd recommend the article by Parks quoted above, and two other recent essays: "Belt and Road Economics: Opportunities and Risks of Transport Corridors," a report released a couple of months ago by the World Bank; and "Catching the Dragon: Responding to China's Trillion Dollar Charm Offensive in Developing Countries," by Staci Warden, in the Milken Institute Review (Third Quarter 2019).

Is Opposition to Immigration Primarily Economic or Cultural?

It's clear that there is considerable hostility to immigration, both in the United States and across much of Europe. Is that opposition rooted primarily in economic factors or in cultural factors? What kind of evidence could help answer the question?

One approach is to look at whether anti-immigrant attitudes are more common among occupations more threatened by immigrant competition or in local areas that have received more immigrants. If so, this would support an economic explanation for anti-immigrant sentiment. Another approach is the "survey experiment," which involves doing a survey with several different versions that differ in what questions are asked, and thus seeing what factors are shaping people's attitudes. Both approaches suggest that cultural factors matter more than economic factors in anti-immigrant sentiment.

For a sampling of the evidence on this point, I'll draw upon some comments in a couple of papers in the Fall 2019 issue of the Journal of Economic Perspectives: "The Surge of Economic Nationalism in Western Europe," by Italo Colantone and Piero Stanig (pp. 128-51) and "Economic Insecurity and the Causes of Populism, Reconsidered," by Yotam Margalit (pp. 152-70). 

For example, Colantone and Stanig looked at areas that had received more immigrants and found: 

Public opinion research consistently finds that direct competition with immigrants on the labor market is not an important predictor of anti-immigration sentiments. Instead, anti-immigrant views are mostly driven by generalized fears of potential economic or social harm caused by immigration, perceived as a threat to national culture (Hainmueller and Hopkins 2014). … 

Empirical evidence suggests that economic hardship of different origin may be a more important predictor of anti-immigration sentiments than the actual presence of immigrants in a region. As one vivid example, immigration was one of the single most important issues motivating Leave voters in the Brexit referendum of 2016 (Ashcroft 2016; Ipsos MORI 2016). Yet there is no robust evidence of higher Leave vote shares in regions where a larger fraction of the population is foreign born, or where relatively more immigrants arrived in the years prior to the referendum (Colantone and Stanig 2018a). Consistently, our own empirical evidence shows that negative attitudes about immigration at the individual level are driven not by the share of foreign-born population in the region of residence, nor by the recent arrival rates of immigrants. Rather, what seems to explain nativist attitudes is contextual economic distress—for instance, driven by the exposure to the Chinese import shock (Colantone and Stanig 2018a). Economic distress also seems to drive more cultural concerns about immigration, such as the belief that immigrants do not make a positive contribution to the national cultural life (Colantone and Stanig 2018a, b). In a situation of poor economic performance, it can be easy and politically rewarding to blame immigrants, even when the underlying economic grievances have very different origins, as in the case of globalization. 

Margalit found that opposition to immigration doesn't seem linked to whether one works in an occupation more likely to be disrupted by immigrant labor:

Furthermore, my collaborators and I find that workers employed in very different segments of the labor market, such as meat-packing, education, and finance—differing in terms of skill specificity, penetration by foreign labor, and value added per worker—share remarkably similar preferences in terms of the skill profile of the immigrants they are willing (or not) to accept (Hainmueller, Hiscox, and Margalit 2015). This finding does not sit well with a prediction that natives will be more opposed to immigrants with skill levels similar to their own, or indeed with any model that predicts that different segments of native workers will have different preferences regarding the desired type of immigrants.

Margalit also describes "survey experiments," where several slightly different surveys are given, and then the results can be compared to get a sense of what is affecting people's beliefs–especially about issues like bias against immigrants of different ethnic or cultural background. Here's an example: 

For example, a survey experiment in the United Kingdom varied the information it provided to participants about the skill mix of immigrants coming into the country, their region of origin, and the impact of immigration numbers on the long-term share of white Britons. The study finds that even when controlling for the information about skill mix and region of origin, the very mention of the immigrants’ impact on the share of white Britons almost halves support for current immigration levels (reducing it by 17–22 percentage points to about 20 percent of the public) (Kaufmann 2018). Experiments conducted in the United States find a similar effect, in which prompting (or reminding) white Americans about the impending racial shift and future loss of their majority status magnifies their racial bias, particularly toward Hispanics, and increases support for restrictive immigration policies (Craig and Richeson 2014; Major, Blodorn, and Blascovich 2018).

Another kind of survey experiment is a "list experiment," which Margalit describes in this way: 

In a study using this method, Janus (2010) randomly divided a national sample of US non-Hispanic whites into two groups and asked them to read a list of several statements. After reading the list, respondents in both groups were asked to report the total number of statements they “oppose or are against,” without having to report their view on each specific statement. For the control group, the list included three statements on issues on which concerns with social desirability are unlikely to be a problem, such as whether or not they oppose “Professional athletes making millions of dollars per year.” For the treatment group, the list contained the same three nonsensitive statements, but with an addition of a fourth statement: “Cutting off immigration to the United States.” In this experiment, the difference in the mean number of statements reported by participants in the control group (1.77) and the mean number reported by participants in the treatment group (2.16) is attributable only to the additional sensitive item and to sampling error.
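The arithmetic behind the list-experiment estimate is simple enough to compute directly. Here's a minimal sketch in Python; the two group means come from the Janus (2010) figures in the quoted passage, and everything else is just illustration:

```python
# Minimal sketch of the list-experiment estimator described above.
# The two group means come from the quoted passage on Janus (2010);
# nothing else here is from the original study.

mean_control = 1.77    # avg. number of opposed items (3 nonsensitive items)
mean_treatment = 2.16  # avg. number with the sensitive 4th item added

# With random assignment, both groups should oppose the three
# nonsensitive items at the same average rate, so the difference in
# means estimates the share of respondents opposing the sensitive item.
share_opposing = mean_treatment - mean_control
print(f"Estimated share opposing 'cutting off immigration': {share_opposing:.0%}")
# -> about 39%, without any respondent revealing an individual answer
```

The appeal of the design is exactly that last comment: the estimate is recovered at the group level, so no individual respondent ever has to admit to holding the socially sensitive view.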

In doing surveys about attitudes concerning immigration, one of the strongest results is that those with less education are much more likely to be opposed to immigration, while those with more education are likely to favor immigration. The key point here is that those with high education levels favor more immigration of those with both high and low skill levels, which presumably includes support for immigration of those who would be competing with them for jobs. Conversely, those with low education levels tend to oppose immigration of those with both high and low skill levels, which means they oppose immigration both of those who might be competing with them for jobs and also those who are unlikely to be competing with them. This pattern suggests that cultural attitudes about immigration, correlated with higher and lower levels of education, are driving the results.

The message that current hostility to immigration is primarily cultural is consistent with recent research on US opposition to immigration a century ago. Marco Tabellini's "Gifts of the Immigrants, Woes of the Natives: Lessons from the Age of Mass Migration" is forthcoming in the Review of Economic Studies (as of May 6, 2019, it is available as corrected proof pages). Here's the abstract of Tabellini's article:

In this article, I jointly investigate the political and the economic effects of immigration, and study the causes of anti-immigrant sentiments. I exploit exogenous variation in European immigration to U.S. cities between 1910 and 1930 induced by World War I and the Immigration Acts of the 1920s, and instrument immigrants’ location decision relying on pre-existing settlement patterns. I find that immigration triggered hostile political reactions, such as the election of more conservative legislators, higher support for anti-immigration legislation, and lower redistribution. Exploring the causes of natives’ backlash, I document that immigration increased natives’ employment, spurred industrial production, and did not generate losses even among natives working in highly exposed sectors. These findings suggest that opposition to immigration was unlikely to have economic roots. Instead, I provide evidence that natives’ political discontent was increasing in the cultural differences between immigrants and natives. Results in this article indicate that, even when diversity is economically beneficial, it may nonetheless be socially hard to manage.

In the United States, as in many other countries, immigration is a relatively small factor in its effect on either levels of wages or inequality of wages. (For discussion of the US situation in particular, see Giovanni Peri, "Immigrants, Productivity, and Labor Markets," Journal of Economic Perspectives, Fall 2016, 30:4, pp. 3-30.) The enormous political energy surrounding immigration issues grows from non-economic roots.

Can Job Training Work for Mature Adults?

Most workers learn additional skills throughout their work-life. In that sense, it's obvious that job training in some contexts works for adults. But for many of us, the additional learning happens in the context of remaining in the same job, or at least on the same broad career path. The harder question is whether it's possible to do the kind of job training with adults, perhaps adults who have just experienced a negative shock to their previous job, that jump-starts them on a career path. Paul Osterman offers a useful discussion of these issues in "Employment and training for mature adults: The current system and moving forward" (Brookings Institution Future of the Middle Class Initiative, November 2019). Here are a few themes from his report that stuck with me: 

Overall Perspectives on Adult Training in the United States

The United States does not have a training system for adults if what is meant by the term “system” is a well-articulated set of programs or opportunities that fit together in a logical stepwise way and which are readily accessible to all those who are interested or need assistance. What the United States does have is a diverse set of opportunities, some large and some small, and for some of these we know what is best practice and for some we are in the dark about effectiveness. … According to the OECD, in 2016 Germany spent six times more than the U.S. relative to GNP on public training programs and France spent ten times more.

Many Employers Seem to be Trying to Hand Off Job Training to the Public Sector

Firms have long been at the center of the American system of job training and the classic model has been that schools provide general skills training while the skills for specific jobs are provided by employers. … [T]he historical division of labor between firms and public training organizations that has underwritten the U.S. system appears to be in doubt. Although the data are weak there is at least reason to believe that firms are expecting public training institutions to play a greater role than they have in the past in providing specific vocational training. … It is difficult to assess the extent to which this arrangement continues because reliable data on the firm based training are simply not available with the last national survey a decade old. … The rhetoric of the employer community regarding the importance of public training systems, especially community colleges, suggests a possible effort to shift training costs to the public sector. A benign explanation would be shorter job tenures which make recouping of training costs more difficult or, alternatively, that skills are becoming more general, perhaps due to the increased importance of computer skills. A less benign explanation is simply cost shifting. Until we have better data we will not have an answer about trends or explanations.

Community Colleges are Underfunded but Work Well–For Those Who Complete a Degree

The good news about community colleges is that when students complete a degree or certificate the rate of return is good. … All this said, there are important concerns regarding community colleges the most central of which is retention and graduation. Reported rates are weak—nationally 39.2 percent of community college full or part-time students who enrolled in 2012 earned a credential from either a two or four year school within six years of initial enrollment …

Community colleges are at scale but face several challenges, the first of which is that they are underfunded. Per pupil operating expenditures is less than half that of four year bachelor’s (not masters and not research) private colleges and state higher education funding streams have yet to attain levels in 2008 prior to the Great Recession. Another way of seeing this is to note that the U.S. Department of Education estimates that in 2016-2017 the instructional costs per full-time equivalent student in public 2-year institutions were about $6,900 compared to $12,700 in public four year schools. The consequences are both higher tuition levels, which challenge access, as well as inadequate support systems which have significant implications for retention and completion. 

Intermediary Systems, Where Industry or Employers Partner with an Intermediary or Community College Can Work Well

The core best practice components of intermediaries are close relationships with employers (the so-called dual customer model), support services and counseling for clients, and substantial investments in training. Depending on the specific program the actual training is either done by the intermediary itself or by a community college. If the training is the responsibility of the community college the intermediary works closely with that institution around issues of scheduling and support. In order to achieve the close relationship with employers intermediary staff become knowledgeable about the nature of the industry and the needs of employers. Intermediaries which adhere to this broad model may be sponsored by community groups, business associations, or unions. …  A strong example is Project QUEST in San Antonio which was subject to an RCT with a nine year follow up. QUEST exemplifies the best practice elements described above. From year three to year nine participants earned significantly more than the control group and by year nine the gap was over $5,000 per year in annual earnings. These impacts are not unique to QUEST and other rigorous evaluations of best practice intermediaries also find positive results.

We Don\’t Have Successful Models of Retraining Mature Adults Who Have Lost Their Jobs

Mature adults who lose their jobs are a distinctive group, representing a challenging population, for obvious reasons: on average lower levels of education and embedded personal and family commitments that often limit options. In addition the stigma of job loss is an additional burden given that potential employers may fear that it is a negative signal. There are very few credible assessments of training interventions for this group. The raw data from the Trade Adjustment Assistance Program (which only captures a subset of the dislocated worker population) are not encouraging: in 2017 72.5 percent of program participants entered employment post program participation and of these the earnings replacement ratio for 40-49 year olds was 83.9 percent and for 50-59 year olds 75.3 percent (the earnings replacement ratios were much better for young people). A Mathematica evaluation of TAA using the experience of the early 2000’s found between a zero and a negative impact on earnings although, again, younger participants did better.
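For readers unfamiliar with the term, an earnings replacement ratio is simply post-program earnings divided by pre-displacement earnings. A small illustrative calculation (the dollar amounts are hypothetical; only the 75.3 percent ratio comes from the TAA figures quoted above):

```python
# Illustration of the earnings replacement ratio used in the TAA
# statistics above. The dollar figures are hypothetical; only the
# 75.3% ratio comes from the quoted passage.

def replacement_ratio(post_earnings: float, pre_earnings: float) -> float:
    """Share of pre-displacement earnings recovered after the program."""
    return post_earnings / pre_earnings

pre = 50_000                 # hypothetical pre-displacement annual earnings
post = pre * 0.753           # what a 75.3% replacement ratio implies

print(f"Replacement ratio: {replacement_ratio(post, pre):.1%}")   # 75.3%
print(f"Implied annual earnings loss: ${pre - post:,.0f}")        # $12,350
```

In other words, even the "entered employment" cases among 50-59 year-olds were typically absorbing an earnings cut of roughly a quarter.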

We Don\’t Yet Have Good Evidence on Some New Alternative Training Models

[W]e have little understanding regarding the scope and performance of new training models—boot camps, on-line programs, and certificate programs offered by non-traditional institutions. It is clear that this is a heterogeneous collection and it seems likely there is diversity in performance and in what lessons can be learned about delivering skill.

My own bottom line is that there's a lot of talk about how new entrants to the job market need more training, and how current workers and those between jobs need both training in their current work and sometimes training to pick a different career path. But the actual commitment to making this happen has not come close to matching the rhetoric. Two basic steps would be to raise spending on community colleges by $20 billion or so, and to work much harder on involving business leaders in a conversation about what skills and training they are ready and willing to hire and promote. 

Is Trade Still a Viable Path to Development? World Development Report 2020

Many of the world's development success stories in recent decades followed a broadly similar pattern. The countries became more involved with the world economy, often by exporting manufactured goods produced by low-wage workers. The rise in exports brought economic growth and income to their economies, but perhaps just as important, it also helped to foster a range of managerial, financial and technological skills. In this way, exporting was a fundamental step on the path to economic development.

The World Development Report 2020, subtitled "Trading for Development in the Age of Global Value Chains," asks whether that step is still available for countries trying to develop their economies.

International trade expanded rapidly after 1990, powered by the rise of global value chains (GVCs). This expansion enabled an unprecedented convergence: poor countries grew faster and began to catch up with richer countries. Poverty fell sharply. These gains were driven by the fragmentation of production across countries and the growth of connections between firms. Parts and components began crisscrossing the globe as firms looked for efficiencies wherever they could find them. Productivity and incomes rose in countries that became integral to GVCs—Bangladesh, China, and Vietnam, among others. The steepest declines in poverty occurred in precisely those countries.

Today, however, it can no longer be taken for granted that trade will remain a force for prosperity. Since the global financial crisis of 2008, the growth of trade has been sluggish, and the expansion of GVCs has slowed. The last decade has seen nothing like the transformative events of the 1990s—the integration of China and Eastern Europe into the global economy and major trade agreements such as the Uruguay Round and the North American Free Trade Agreement (NAFTA).

At the same time, two potentially serious threats have emerged to the successful model of labor-intensive, trade-led growth. First, the arrival of labor-saving technologies such as automation and 3D printing could draw production closer to the consumer and reduce the demand for labor at home and abroad. Second, trade conflict among large countries could lead to a retrenchment or a segmentation of GVCs.

What does all this mean for developing countries seeking to link to GVCs, acquire new technologies, and grow? Is there still a path to development through GVCs?

World Bank reports often have a certain kind of can-do spirit. When you ask "is there still a path to development?" the answer is pretty much always an appropriately qualified "yes." Thus, a substantial amount of the report offers useful evidence that developing countries (or regions of countries) which have become part of global value chains have indeed experienced substantial benefits. Moreover, the report argues that developing countries which pursue domestic economic reforms and open trade, along with social and environmental protections, can still benefit. There is the expected call for more and better-designed international trade agreements. Here, I want to focus on the theme of technological change and trade, as discussed in Chapter 6 of the report.

Information and communications technology is making it much easier to coordinate production chains and suppliers across countries.

High-speed Internet enables firms in developing countries to link to GVCs [global value chains]. The introduction of fast Internet in Africa and China has spurred employment and export growth, as recent studies of the economic effects of the rollout have shown. In Africa, the gradual arrival of submarine Internet cables led to faster job growth (including for low-skilled workers) in locations that benefited from better access to fast Internet relative to those that did not, with little or no job displacement across space. Increased firm entry, productivity, and exporting are among the drivers of the higher net job creation in these locations. Similarly, in China provinces experiencing an increase in the number of Internet users per capita also witnessed faster export growth, with more firms competing in international markets and a higher share of provincial output sold abroad. These examples attest to the potential of ICTs [information and communication technologies] to help countries become part of international supply chains. They also show that the uneven provision of ICT infrastructure can aggravate spatial inequalities …

In many developing countries, the costs of dealing with customs as goods cross the border and are transported within countries are high. Digital technologies can help here, too.

Digital technologies can improve customs performance by automating document processing and making it possible to create a single window for streamlining the administrative procedures for international trade transactions. In Costa Rica, a one-stop online customs system increased both exports and imports. Similarly, in Colombia computerizing import procedures increased imports, reduced corruption cases, bolstered tariff revenues, and accelerated the growth of firms most exposed to the new procedures. …
Some robotics and artificial intelligence applications might further reduce logistics costs, the time to transport, and the uncertainty of delivery times. At ports, autonomous vehicles might unload, stack, and reload containers faster and with fewer errors. Blockchain shipping solutions may lower transit times and speed up payments. The Internet of Things has the potential to increase the efficiency of delivery services by tracking shipments in real time, while improved and expanded navigation systems may help route trucks based on current road and traffic conditions. Although the empirical evidence on these impacts is limited, it is estimated that new logistics technologies could reduce shipping and customs processing times by 16 to 28 percent … 

In addition, developing countries face large intranational trade costs, which determine the extent to which producers and consumers in remote locations are affected by changes in trade policy and international prices. For example, the effect of distance on trade costs within Ethiopia or Nigeria is four to five times larger than in the United States. Intermediaries capture most of the surplus from falling world prices, especially in more distant locations. Therefore, consumers in remote locations see only a small part of the gains from falling international trade barriers. Despite recent advances in the provision of ICT infrastructure, the scope for further expanding access to high-speed Internet in developing countries remains huge. … For many goods traded in GVCs, a day’s delay is equal to imposing a tariff in excess of 1 percent.

Computerized translation between languages reduces a barrier to trade, too.

Machine learning also reduces the linguistic barriers to trade and GVC participation. One application of machine learning—machine translation—has improved in recent years. For example, the best score at the Workshop on Machine Translation for English to German rose from 15.7 to 28.3, according to a widely used comparison metric, the BLEU score. The introduction of machine translation from English to Spanish by eBay has significantly boosted international trade between the United States and Latin America on this platform, increasing exports by 17.5 percent. These effects reflect a reduction in translation-related search costs and show that artificial intelligence has already begun to boost trade in North and South America.

There have been concerns that in a world economy full of automation, robots, 3D printing, and other technology, developing countries may face a risk of "premature deindustrialization"–that is, they won't be able to use their natural comparative advantage of cheap labor to enter global value chains in manufactured goods. But at least so far, robots and automation seem to be boosting international trade between high-income countries and emerging markets, rather than leading to "reshoring" of previously imported products and a reduction in trade.

Despite the concerns about the effects of automatization, the evidence that reshoring will result is very limited. … Thus far, the rising adoption of industrial robots and 3D printing seems to have promoted North–South trade. Greater robot intensity in production has led to more imports sourced from lower-income countries in the same broad industry—and to an even stronger increase in gross exports (which embody imported inputs) to those countries. The surge in imports from the South has been concentrated in intermediate goods such as parts and components. The positive impact of automation on imports, particularly on imports of intermediates, attests to the importance of examining the effects of robotization on trade through a GVC framework. More-traditional trade models would predict the increase in exports by the North but fail to foresee the surge in imports from the South in the same industry. Rather than reducing North–South trade, robotization seems to have been boosting it, although it is uncertain whether this trend is likely to continue.

The rise of information technology, and its effects in reducing transportation costs, have caused a wave of not-previously-traded products to become part of international trade. In many cases, these are intermediate and unfinished goods where it now makes economic sense to ship them across national borders. In other cases, they are brand-new goods–even services or digital goods.

Since the 1990s, many new types of products have entered global trade, primarily intermediate goods, further demonstrating the increasing fragmentation of production and the emergence of entirely new  products. Indeed, the trade in new products has grown dramatically. In 2017, 65 percent of trade was in categories that either did not exist in 1992 or were modified to better reflect changes in trade. Trade in intermediate goods (parts and components and semifinished goods) expanded, and entirely new products entered global trade. For example, trade in IT products tripled over the past two decades, as trade in digitizable goods such as CDs, books, and newspapers steadily declined from 2.7 percent of the total goods trade in 2000 to 0.8 percent in 2018. Technological developments are likely to continue to produce product churning. Because of technological progress, more goods and services are likely to become tradable over time. For example, platforms such as Upwork and Mechanical Turk make it easier for businesses to outsource tasks to workers who can perform them virtually. And new goods and services are likely to be developed, including ones not even imaginable today, thereby boosting the incentives to trade.

One potential source of productivity for developing countries is that international trade raises the rewards for organizing into larger firms and increasing the scale of production:

In part because of high trade costs, firms in low-income countries tend to operate on a small scale and are less likely to export or import. The modal manufacturing firm in the United States has many workers, and larger firms tend to be more productive and pay higher wages and are more likely to export and import. By contrast, a modal firm in most developing countries has one worker, the owner. Among firms that do hire additional workers, most hire fewer than 10. In India, Indonesia, and Nigeria, firms with fewer than 10 workers account for more than 99 percent of the total.

But behind all these legitimate reasons for overall enthusiasm, some potential problems lurk. It's interesting and useful to discuss the ways that technology is likely to increase global trade, and how it can still serve as a pathway to higher growth for developing countries. However, technology typically brings a mixture of winners and losers, both within and across countries. Some of the international trade success stories from developing economies will happen as potential competitors are crowded out. For a hint of these negative possibilities, here are a couple of paragraphs from near the end of the chapter. Watch for the various qualifications, the "maybes" and the "likelys" and the tradeoffs. As is so often true when thinking about the effects of new technology, reading the passage without emphasizing these qualifications is a soothing and positive experience; reading it while emphasizing these qualifications is worrisome. Both reactions are valid!

Although predicting the future is a treacherous exercise, new technologies will likely reduce trade costs and make it easier to participate in global markets. Such outcomes may offer developing countries new opportunities to link into GVCs. However, the attendant intensification of competition may make it more challenging for countries to succeed. Platform firms, for example, are making it easier to connect, but their reputation mechanisms for verifying supplier quality tend to foster concentration and make it harder for entrants to grow. They are creating new challenges for regulators both because they wield market power and because their interactions with agents in different parts of the value chain may create potential conflicts of interest and enhance the scope for anticompetitive conduct.

Automation anxiety is not warranted for all developing countries. Although some countries are likely to lose manufacturing employment because of greater competition in output markets, countries that are part of GVCs and supplying inputs to other countries that are automating may see an increase in the demand for their goods, and consumers everywhere will enjoy lower prices. The primary challenge arising from new production technologies is to ensure that the benefits are shared and that losers are compensated both across and within countries. Among the countries adopting these technologies, labor market disruptions are likely to be significant, skill premiums are likely to rise, and labor’s share of income may decline further.

Some Economics of the Clean Water Act

Here\’s an uncomfortable set of facts about federal clean water policy in the United States: 1) People care about it a lot; 2) Over the years, total spending on clean water has been high; 3) Water quality has improved; and 4) The estimated benefits of clean water regulation in the US seem relatively low and in many cases even negative. My discussion here will draw on an essay by David A. Keiser and Joseph S. Shapiro, \”US Water Pollution Regulation over the Past Half Century: Burning Waters to Crystal Springs?\” Journal of Economic Perspectives, Fall 2019, 33:4, pp.  51-75.

Keiser and Shapiro offer some evidence from Gallup polls that clean water is traditionally near the top of environmental concerns.

The amount spent under clean water legislation has been substantial. They write:

Over the period 1970 to 2014, we calculate total spending of $2.83 trillion to clean up surface water pollution, $1.99 trillion to provide clean drinking water, and $2.11 trillion to clean up air pollution (all converted to 2017 dollars). Total spending to clean up water pollution exceeded total spending to clean up air pollution by 70 to 130 percent. … Since 1970, the United States has spent approximately $4.8 trillion (in 2017 dollars) to clean up surface water pollution and provide clean drinking water, or over $400 annually for every American. In the average year, this accounts for 0.8 percent of GDP, making clean water arguably the most expensive environmental investment in US history. For comparison, the average American spends $60 annually on bottled water … 
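Those totals are easy to sanity-check. Here's a quick back-of-the-envelope version; the $4.8 trillion figure is from Keiser and Shapiro, while the time span and average population are round numbers I'm assuming for the check:

```python
# Back-of-the-envelope check on the "over $400 annually for every
# American" figure. The $4.8 trillion total is from Keiser and Shapiro;
# the time span and average population are my own rough assumptions.

total_spending = 4.8e12   # 2017 dollars, water spending, 1970-2014
years = 2014 - 1970       # 44 years
avg_population = 250e6    # rough average US population over the period

per_person_per_year = total_spending / years / avg_population
print(f"Per American per year: ${per_person_per_year:,.0f}")
# -> roughly $440, consistent with "over $400 annually" in the quote
```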

The quality of water has improved. For example, the share of wastewater and industrial discharge being treated has risen. One common measure is whether the water is "fishable," and the share of water "not fishable" has been declining.

But here\’s a kicker: the benefit-cost ratios for cleaning up water, especially surface water, don\’t look as good as the ratios for cleaning up air pollution. The first column looks at benefit-cost analyses for cleaning up surface water, typically under the Clean Water Act, which supported sewage treatment plants and regulates facilities discharging waste from a \”fixed source,\” like a pipe, into navigable waters. The second column looks at benefit-cost ratios for rules about cleaning up drinking water, typically under the Safe Drinking Water Act, which sets and enforces drinking water standard and also has a say in cleaning up groundwater.

The perhaps startling pattern is that the benefit-cost ratios for surface water rules are typically less than one, meaning that benefits are below costs. For drinking water, the benefit-cost ratios on average exceed one, but it's still true that 20% of the rules have a benefit-cost ratio below one. Rules about air pollution have much better benefit-cost ratios.

So what\’s going on here? Here are some thoughts:

1) The rules listed here are often about additions to the earlier rules. Thus, it's possible that earlier rules about protecting surface water and drinking water had better benefit-cost ratios, but now that some of the worse problems have been addressed, the benefit-cost ratios for additional rules are lower.

2) For surface water rules, in particular, the "benefits" in these studies rarely involve human health. Reducing illness and saving lives in humans is where the big benefits are. If the benefits are measured as improved recreational opportunities on certain lakes and rivers, the numbers are going to be much lower. In addition, it seems that a number of the studies of benefits of cleaner surface water don't take into account improvements in property values or work conditions from being close to cleaner water. Benefits like biodiversity from cleaner water may also be underestimated. As Keiser and Shapiro write: "Most existing benefits of surface water quality are believed to come from recreation, but available data on recreation are often geographically limited (for example, one county, state, or lake) and often come from a single cross section. Hence, our subjective perception is that underestimation of benefits is more likely a concern for surface water quality regulation than for other regulations."

3) A wide range of evidence has shown that market-based environmental regulation–like using cap-and-trade arrangements or pollution taxes–can be a much cheaper way to achieve a given amount of environmental cleanup. However, it's often harder to figure out how to use a system of, say, tradeable pollution permits to clean up surface water. As a result, the costs in these benefit-cost calculations may be higher than necessary. Still, use of these more flexible tools in surface water clean-up is rising: for example, there is a Chesapeake Bay Watershed Nutrient Credit Exchange and a Minnesota River Basin Trading market. For a discussion from a few years ago about these issues, a good starting point is Karen Fisher-Vanden and Sheila Olmstead. 2013. "Moving Pollution Trading from Air to Water: Potential, Problems, and Prognosis." Journal of Economic Perspectives, 27 (1): 147-72.

4) Even accepting the possibility that reducing water pollution has greater benefits than currently estimated, and potentially lower costs, a general uneasiness remains that the benefit-cost ratios are lower than for other forms of environmental protection.

There are some big issues about water pollution on the horizon. For example, the original clean water laws back in the early 1970s pretty much skipped over agriculture, but in many parts of the country agricultural runoff is the single biggest surface water pollution problem.

There\’s also a big dispute going on over what is meant in the Clean Water Act by the phrase \”Waters of the United States.\” It\’s clear that this includes rivers and lakes. But what about wetlands, headwater areas that drain into rivers and lakes, or streams that come and go seasonally? As the authors write:

Another challenge involves the language of the Clean Water Act protecting “Waters of the United States,” which has led to legal debates over how this term applies to roughly half of US waters, primarily composed of wetlands, headwaters, and intermittent streams. Two Supreme Court decisions held that the Clean Water Act does not protect most of these waters (Rapanos v. United States, 547 US 715 [2006]; Solid Waste Agency of Northern Cook County (SWANCC) v. US Army Corps of Engineers, 531 US 159 [2001]). In 2015, the Obama administration issued the Waters of the United States Rule, which sought to reinstate these protections. However, in 2017, President Trump issued an executive order to rescind or revise this rule. The net benefits of these regulations have also become controversial …

Keiser and Shapiro also point out that there is a LOT more economics research about air quality than water quality, perhaps in part because the Environmental Protection Agency collects and makes available copious data on air pollution, while data on water pollution are collected more sporadically and scattered across many sources. They point out a number of ways in which data related to water pollution are becoming more complete and available. But matching up the very local steps to reduce water pollution with the very local effects of water pollution, and then tracing water pollution through the natural hydrogeography, remains in many ways a work in progress.

The Declining Share of Veterans Among Prime-Age Men: The Centennial of Armistice Day

The armistice marking the end of World War I was signed on November 11, 1918. A year later–and 100 years ago today–the first Armistice Day celebrations were held at Buckingham Palace. The US Congress passed a resolution commemorating Armistice Day in 1926, and it became a national holiday in 1938. In 1954, after World War II, its name was changed to Veterans Day in the United States.

But as you may have noticed when attending an event where veterans are encouraged to stand and be recognized for their service, the share of "prime-age" men (a term economists use to describe those in their main working years, ages 25-54) who are veterans has been in sharp decline.

Courtney C. Coile and Mark G. Duggan raise this issue in passing in a Spring 2019 essay in the Journal of Economic Perspectives ("When Labor's Lost: Health, Family Life, Incarceration, and Education in a Time of Declining Economic Opportunity for Low-Skilled Men," 33:2, 191-210). They write:

[W]e call attention to perhaps the most significant change among prime-age men in recent decades. In 1980, fully 45 percent of prime-age men in the Bureau of Labor Statistics’ monthly Current Population Survey reported that they had previously served in the military. This number steadily declined during the next 36 years and stood at just 10 percent by 2016 in this same survey.

Of course, a major reason for this shift is the end of the military draft in 1973, a change in which the arguments and projections of economists about how an all-volunteer military force could function played a substantial role (for a background essay, see John T. Warner and Beth J. Asch, "The Record and Prospects of the All-Volunteer Military in the United States," Journal of Economic Perspectives, Spring 2001, 15:2, pp. 169-192).

What do we know about the effects of this dramatic social change? Coile and Duggan write:

Much of the economics literature has examined the effect of military service by using plausibly exogenous variation in the likelihood of service driven by one’s draft lottery number (Angrist 1990). This research has tended to find quite modest long-term effects of military service on employment, earnings, and health status (for example, Angrist, Chen, and Frandsen 2010; Angrist, Chen, and Song 2011). However, these studies are unable to capture the peer effects or general equilibrium effects of military service. Recent research has suggested substantial gains to cognitive and noncognitive skills stemming from military service (Spiro, Stetterson, and Aldwin 2015) and associated benefits such as the GI bill. Overall, we see a strong need for further work to investigate how changing economic opportunities, declines in military service, and other factors are contributing to or cushioning the problems of low-skilled prime-age men.

This shift away from shared military experience is a large and probably understudied social shift. Many of those who served in the armed forces, and survived, have lasting personal ties both to those they knew and to others who shared the experience.

My suspicion is that the effects of military service in later life are probably quite different between the days of the draft and the all-volunteer force. For example, during the draft era, pay could be relatively low and there was not much reason for the armed forces to invest in the human capital of new soldiers, most of whom would be out of military service in a few years. With the all-volunteer force, pay had to be somewhat higher and the armed forces had to focus on training and incentives for retention. When big US companies need a new CEO, they can do a job search outside their own firm. But when the armed forces need a new general or admiral, they have to promote from within.

There are occasional proposals for a national service requirement, proposals which in their rhetoric sometimes piggyback on the strong positive feelings many of us have about veterans. But I'm old enough to have grown up in the aftermath of the Vietnam-era military draft, and it would be a dramatic understatement to say that the draft was unpopular. My own cynical observation about national service proposals is that it's a case of middle-aged and elderly people voting on which young adults are eligible for exceptions and loopholes, and how the others will be required to spend a couple of years of their lives in low-cost labor. In an odd way, I'd be marginally more sympathetic to a national service proposal requiring, say, that everyone between the ages of 30 and 50 needs to take two years out of their life for full-time, low-wage labor. After all, wouldn't these ages be potentially an even more productive time to "foster unity" and "build bridges" and "bring people together," and all the other claims made for a national service requirement? Or perhaps members of Congress could require that they personally each spend one month out of every two years in a full-time, away-from-home national service requirement.

US Dependence on Imported Minerals

This figure shows the US reliance on imports for various minerals, from the US Geological Survey. I'm fully aware that minerals are not equally distributed around the world, and I'm a pro-trade guy, so I won't lose sleep tonight about these numbers. But during waking hours, I will wonder about whether the supplies from other countries are reasonably steady and reliable. I'll also wonder about whether global pollution is worse because US firms are importing minerals from countries with substantially lower environmental standards than the United States.
[Figure: 2018 US Net Import Reliance, US Geological Survey]

Minimum Wages and Overtime Rules

Perhaps the best-known provision of the Fair Labor Standards Act (FLSA) of 1938 is that it set a federal minimum wage for the first time. In addition, this is the law that established the overtime rule: if you are a "nonexempt" worker–which basically means a worker paid by the hour rather than on a salary–then if you work more than 40 hours/week you must be paid time-and-a-half for the additional hours. 

Charles C. Brown and Daniel S. Hamermesh take a look at the evidence on both provisions in "Wages and Hours Laws: What Do We Know? What Can Be Done?" (Russell Sage Foundation Journal of the Social Sciences, December 2019, 5:5, pp. 68-87). They write:

Although wages and hours are regulated under the same law, policy developments and research on the law’s impacts could not be more different between the two areas. The federal minimum wage has been raised numerous times; and many subfederal jurisdictions impose their own wage minima that, where they exceed the federal minimum, supersede it. Perhaps because of this variation, a huge literature examining the effects of minimum wages on the U.S. labor market has arisen and has continued to burgeon. A fair conclusion is that American labor economists have spilled more ink per federal budgetary dollar on this topic than on any other labor-related policy. The opposite is the case for regulating hours. The essential parameters of hours regulation have not changed since passage of the act; and perhaps because of this, the dearth of research on the economic impact of hours regulation in the United States, especially recently, is remarkable.

(In the shade of these parentheses, I'll also mention that this issue of the RSF journal, edited by Erica L. Groshen and Harry J. Holzer, is especially rich in content, including 10 articles on the general theme of "Improving Employment and Earnings in Twenty-First Century Labor Markets." I'll list the Table of Contents for the issue, with links to the articles, at the bottom of this post.)

Minimum Wages

The US minimum wage situation has changed dramatically in the last decade or so in a particular way: a much larger share of workers live in states with a minimum wage above the federal level. Brown and Hamermesh write:

Over the past thirty years, however, states’ decisions to increase their minimum wages have become increasingly important given that the federal minimum has changed less frequently. For example, in 2010 (after the 2007 federal increases had become fully effective) only one-third of the workforce was in states with state minima that exceeded the federal $7.25. By 2016, with the federal minimum still at $7.25, that fraction had risen to nearly two-thirds. As of 2018, twenty-nine states … had minimum wages above $7.25. States that have raised their minimum wages above the federal minimum have tended to be high-wage states, and the result has been a minimum wage much more closely (though still imperfectly) aligned with local wages.

Brown and Hamermesh focus on the studies that try to estimate the effects of a minimum wage by looking at these differences in minimum wages that have arisen across states (leaving the issues involved in studying city-level minimum wages for another day). Here are some of the points they make: 

There are basically three ways to take advantage of the state-level changes and variations in minimum wage: comparisons between states; comparisons between border counties of states; and comparisons between states and "synthetic" control groups, which basically means finding a combination of other areas whose economic patterns matched those of a certain state before the minimum wage was changed.
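To give a flavor of that third approach, here is a minimal sketch of the synthetic-control weight-finding step, with entirely made-up employment data (real applications match on many more predictors and choose donor pools with care):

```python
# Minimal sketch of the "synthetic control" idea: choose nonnegative
# donor-state weights, summing to 1, so the weighted combination tracks
# the treated state's pre-treatment employment path. All data invented.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical teen-employment rates: 5 donor states x 8 pre-treatment quarters.
donors = rng.normal(55, 2, size=(5, 8))
# A treated state that happens to resemble a mix of the first three donors.
treated = donors[:3].mean(axis=0) + rng.normal(0, 0.2, size=8)

def pretreatment_gap(w):
    """Squared gap between the treated state and the weighted donor mix."""
    return np.sum((treated - w @ donors) ** 2)

n_donors = donors.shape[0]
result = minimize(
    pretreatment_gap,
    x0=np.full(n_donors, 1 / n_donors),   # start from equal weights
    bounds=[(0, 1)] * n_donors,           # no negative weights
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
)
print("Donor weights:", np.round(result.x, 2))
# After the treated state raises its minimum wage, its employment path
# is compared with result.x @ donors over the post-treatment quarters.
```

The estimated effect of the minimum wage change is then the gap between the treated state's actual post-treatment employment and this synthetic counterpart.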

When doing these comparisons, a researcher will want to adjust for other factors that might affect state economies: for example, a natural disaster that hit one state but not another, or a change in the price of oil that would affect an oil-producing state. A researcher can allow for each state or border-county to be following its own time trend, or for the effect of the minimum wage on employment to be different in every state. Is the relationship between a changing minimum wage and employment a straight line or a curved line–and if it's a curved line, how curved is it? The more variables like this you include, the smaller the effect of a minimum wage on employment is likely to be. There is considerable disagreement and controversy over what variables should be included.

It\’s been typical in many of these studies to focus on either teenagers or restaurant workers, because they are both groups that are presumably affected by the minimum wage.

A common finding is that a rise in the minimum wage of 10% raises the wages of teenagers as a group or restaurant workers as a group by about 2%–presumably because some teenagers or restaurant workers were already earning more than the minimum wage and thus weren't affected.
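That 2 percent figure makes sense once you remember that the minimum wage binds for only part of the group. A stylized example, with a wage distribution invented purely for illustration:

```python
# Why a 10% minimum wage increase can raise a group's *average* wage by
# only ~2-3%: most group members already earn above the minimum.
# The hourly wage distribution below is invented for illustration.

old_min, new_min = 10.00, 11.00   # a 10% increase in the minimum wage

# Hypothetical hourly wages for ten teenage workers.
wages = [10.00, 10.00, 10.25, 10.50, 11.00, 12.00, 13.00, 14.00, 15.00, 16.00]

# Anyone below the new minimum is lifted to it; everyone else is unchanged.
new_wages = [max(w, new_min) for w in wages]

pct_change = (sum(new_wages) / sum(wages) - 1) * 100
print(f"Group average wage rises by {pct_change:.1f}%")   # about 2.7% here
```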

Estimates of the effect of raising the minimum wage on employment of either teenagers or restaurant workers are all over the place, depending on exactly how the estimation process is done, but they are usually "small"–which in this case means "small enough that the earnings gains caused by a minimum wage increase are only partially offset by employment losses."

Of course, showing that past minimum wage increases had small effects in reducing employment doesn't prove that additional minimum wage increases would also have small effects. The usual belief of economists is that the effects of a rising minimum wage on employment would be small up to some point, but then start getting larger. That point is likely to vary across states–which is why it makes some sense to have a different minimum wage across states.

At least one recent study has tried to focus on workers age 16-25 who have not completed high school–rather than teenagers in general. There is some evidence that a higher minimum wage might have a bigger effect on these low-education workers in particular than on teenagers or restaurant workers as a whole.

It's plausible that the effects of a higher minimum wage on employment might be larger in the long term. For example, perhaps a firm doesn't fire anyone when the minimum wage rises, but instead just slows down hiring. Or perhaps a higher minimum wage makes certain kinds of firms more likely to exit the market over time, less likely to enter, or more likely to invest in labor-saving technology. Some studies have found support for these effects; others have not.

For some complementary discussion of the evidence on raising minimum wages, see previous posts on this blog.

Overtime Rules

In contrast to minimum wage laws, overtime rules haven't changed much over time. Brown and Hamermesh write: "In the eighty years since the FLSA was enacted, the specification of its crucial parameters regulating hours—a penalty rate of 50 percent extra wages on hours beyond the standard weekly hours (HS) of forty—has not changed." Perhaps the main way overtime rules have surfaced in recent policy disputes is in proposals that employers be allowed to give "comp time" for overtime work, meaning extra vacation time, instead of paying higher wages.

But a big change in the overtime rules has been happening in a subtle way. Back in the mid-1970s, the rule was that a salaried worker had to be paid at least $455/week to be exempt from the requirement to be paid time-and-a-half for overtime. But that $455/week hasn't been changed since then, even though its value has been eaten away by inflation. Brown and Hamermesh calculate that $455/week was about double the median weekly earnings in the US economy back in the mid-1970s; now, it's about 50% of median weekly earnings.

To put this another way, it used to be that you had to earn a salary of double the typical weekly earnings before you were exempt from overtime rules. Now, you can be paid a much lower salary, half of typical weekly earnings, and still be exempt from the overtime rules. The rules requiring overtime pay have thus gradually come to apply to many fewer workers over time. The Obama administration tried to raise the limit to $913/week through an administrative rule, but the courts held (reasonably enough, in my view) that this kind of decision needed to be made by Congress passing a law. Apparently the Trump administration has now proposed raising the limit to $679/week.
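
As a rough illustration of how inflation did the work here, consider deflating the fixed threshold by the Consumer Price Index; the index values below are approximate annual CPI-U averages and should be treated as assumptions.

```python
# Real value of the unchanged $455/week exemption threshold, deflated by
# approximate CPI-U annual averages (assumed: 53.8 in 1975, 255.7 in 2019).
cpi_1975, cpi_2019 = 53.8, 255.7
threshold = 455
real_2019 = threshold * cpi_2019 / cpi_1975
print(f"${threshold}/week in 1975 is roughly ${real_2019:,.0f}/week in 2019 dollars")
# -> about $2,160/week. An unchanged nominal cutoff that once sat at double
#    median weekly earnings now sits at about half of them.
```
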
What would happen if the rules were changed so that dramatically more workers needed to be paid overtime for working more than 40 hours/week? Presumably, some of these workers would be paid overtime, but employers would also try to reduce the number of workers who ended up above that weekly limit. Brown and Hamermesh run through various calculations and look at some international evidence. They write: "We can conclude that increasing the exempt limit would have raised some salaried workers’ earnings and reduced their weekly hours. One exercise suggested that 12.5 million workers would have been affected …"

The effects of changing the rules so that more workers are eligible for overtime pay aren't enormous. Still, for workers who are paid salaries above the current threshold but below the median weekly wage, and thus aren't eligible for overtime, it could be a meaningful gain. They write:

If we are interested in spreading work among more people and removing the United States from its current position as the international champion among wealthy countries in annual work time per worker, minor tinkering with current overtime laws will do little. We might borrow from some of the panoply of European mandates that alter the amount and timing of work hours. Among these are penalties for work on weekends, evenings, and nights and limits on annual overtime hours, while lengthening the accounting period for overtime beyond the current single week. If our goal is to spread work and make for a more relaxed society, these changes will help but their effects will also be small.

____________

The Roundup Case: Problems with Implementing Science-Based Policy

Imagine, just for the sake of argument, that you are open-minded about the question of whether the weed-killer Roundup (long produced by Monsanto, which was recently acquired by Bayer AG) causes cancer. You want to make a decision based on scientific evidence. However, you aren't a scientist yourself, and you don't feel competent to evaluate scientific studies on your own.

Geoffrey Kabat asks "Who's Afraid of Roundup?" in the Fall 2019 issue of Issues in Science and Technology. More broadly, he uses the controversy over Roundup as a way to ask about the role of science in decision-making.

When it comes to Roundup and its active ingredient glyphosate, the Environmental Protection Agency has consistently said that "there are no risks to public health when glyphosate is used in accordance with its current label and that glyphosate is not a carcinogen." As Kabat points out:

The US Environmental Protection Agency’s recent assessment is only the latest in a succession of reports from national regulatory agencies, as well as international bodies, that support the safety of glyphosate. These include Health Canada, the European Food Safety Authority (EFSA), the European Chemicals Agency, Germany’s Federal Institute for Risk Assessment, and the Food and Agriculture Organization of the United Nations, as well as health and regulatory agencies of France, Australia, New Zealand, Japan, and Brazil.

But just when you find yourself deeply relieved that the experts have reached a consensus, you find that one agency disagrees. In 2015, the International Agency for Research on Cancer (IARC) listed glyphosate as a “probable carcinogen.” There are lots of reasons to be dubious about the IARC decision, and to believe the consensus of all the other agencies around the world, and Kabat runs through quite a list. Here are a few of his points:

  • Unlike the other health-and-safety agencies, IARC ignores the size of the dose. Thus, for example, of some 500 agents and chemicals that IARC has evaluated without regard to dose, it has found 499 to be possible carcinogens. Other agencies take the dose into account.
  • The IARC evaluation looked only at certain parts of some studies of how glyphosate affected rodents. Reanalysis of the same studies found that the "IARC Working Group that conducted the assessment selected a few positive results in one sex and used an inappropriate statistical test to declare some tumor increases significant."
  • There is a major study funded by the National Cancer Institute looking at 54,000 pesticide applicators in Iowa and North Carolina. "Indeed, when the results for glyphosate and cancer incidence … were finally published in the Journal of the National Cancer Institute, in 2018, the paper reported no significant increases …"
  • A key scientist in the IARC process both led the way in designating glyphosate as a substance to be studied and in writing the IARC report. Then, two weeks after the report came out, this scientist "signed a lucrative contract to act as a litigation consultant with a law firm—Lundy, Lundy, Soileau, and South—engaged in bringing lawsuits against Monsanto for Roundup exposure."

In a bigger-picture sense, the actual science of Roundup and glyphosate becomes almost irrelevant to the public disputes. The scientific question of whether glyphosate is a carcinogen gets treated as identical to the political question of whether one is anti-pesticide, anti-genetic-modification, and anti-Big Agriculture.

The result is what the head of the European Food Safety Authority called "the Facebook age of science." As background, the European agencies are well-known for their willingness to invoke the "precautionary principle": basically, if we aren't sure and it might cause a problem, we should prohibit it. In this spirit, a group of almost 100 scientists wrote to EFSA to complain about its decision allowing glyphosate. Here's how Bernhard Url, the head of EFSA, responded:

You have a scientific assessment, you put it on Facebook, and you count how many people ‘like’ it. For [EFSA], this is no way forward. We produce a scientific opinion, we stand for it, but we cannot take into account whether it will be liked or not. … People that have not contributed to the work, that have not seen the evidence most likely, that have not had the time to go into the detail, that are not in the process, have signed a letter of support [for a ban on glyphosate]. Sorry to say that, for me, with this you leave the domain of science, you enter into the domain of lobbying and campaigning. And this is not the way EFSA goes.

Roundup is of course just one product, but the issue of how science will be used in public policy is much broader. For example, if a lawsuit alleges that Roundup causes cancer, the truth of that accusation presumably matters. As Kabat points out, it "should come as no surprise that the same factors that are at work here are at work in many other areas, whether electromagnetic fields, cell phone 'radiation,' so-called endocrine disrupting chemicals, numerous aspects of diet, cosmetic talc, GMOs, vaccines, nuclear power, or climate change."

In my own contentious way, I find it especially interesting when people make strong appeals to a scientific consensus in one area but then dismiss it in others. For example, those who believe that action should be taken to reduce greenhouse gas emissions sometimes accuse their opponents of denying "the science." But on occasion, it turns out that those who wrap themselves in the mantle of "the science" when it comes to climate change oppose vaccinations or Roundup. The proposal to build the Keystone XL oil pipeline from Canada into the United States went through multiple environmental reviews during the Obama administration, each one finding that it would not have a significant negative effect. For those protesting the pipeline, as for those writing group letters to the European regulators about glyphosate, the "science" was acceptable only if it supported their prior beliefs.

One of my favorite examples of "the science" versus popular beliefs involves the irradiation of food. For a quick overview, Tara McHugh describes "Realizing the Benefits of Food Irradiation" in the September 2019 issue of Food Technology Magazine. As she notes, the Food and Drug Administration recently approved irradiation for fresh fruits and vegetables, and it had already been approved for a range of other food products. McHugh writes:

The global food irradiation market was valued at $200 million in 2017 and was projected by Coherent Market Insights to grow at a 4.9% combined annual growth rate from 2018 to 2026. This projects the market size to rise to $284 million by 2026. This high growth rate was envisioned due to increased consumer acceptance since the U.S. Food and Drug Administration (FDA) approved phytosanitary treatment of fresh fruits and vegetables by irradiation. The food irradiation market in Asia is also growing very rapidly owing to approval of government agencies in India and other countries. Presently over 40 countries have approved applications to irradiate over 40 different foods. More than half a million tons of food is irradiated around the globe each year. About a third of the spices and seasonings used in the United States are irradiated.

It would be interesting to see a Venn diagram showing how many of those who believe in "the science" when it comes to climate change also believe in "the science" when it comes to the safety of Roundup, vaccinations, or irradiated food. Or perhaps there is a human cognitive bias that makes people more prone to believe "the science" when it warns of danger, but less likely to believe it when it tells us that something we believe to be dangerous (or oppose on other grounds) is actually safe.

Flexible vs. Deep: What are the Ties that Bind a Firm Together?

One of the classic questions in economics is what determines which activities happen inside or outside a company: that is, why do companies buy some inputs from outside the firm through market transactions, but hire workers to produce other inputs inside the firm? Economists will recognize this as the central question posed by Ronald Coase (Nobel '91) in his famous 1937 essay, "The Nature of the Firm" (Economica, November 1937, pp. 386-405). Coase points out that economic activity within firms is coordinated by conscious administrative action, while economic activity between firms is coordinated by supply and demand. In one passage that always makes me smile, Coase writes (footnotes omitted):

As D. H. Robertson points out, we find “islands of conscious power in this ocean of unconscious co-operation like lumps of butter coagulating in a pail of buttermilk.” But in view of the fact that it is usually argued that co-ordination will be done by the price mechanism, why is such organisation necessary? Why are there these “islands of conscious power”? Outside the firm, price movements direct production, which is co-ordinated through a series of exchange transactions on the market. Within a firm, these market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur-co-ordinator, who directs production. It is clear that these are alternative methods of co-ordinating production.

The line between which activities are coordinated more effectively by administrative action inside a firm and which are coordinated more effectively by market transactions between firms shifts over time, and across different types of firms. One current example is the number of firms that sell manufactured goods but are "factoryless": that is, they don't own or manage the factory in which their goods are produced. Diane Coyle and David Nguyen offer some recent examples in "No plant, no problem? Factoryless manufacturing and economic measurement" (ESCoE Discussion Paper 2019-15, September 2019). In a short overview of that paper, they write:

Did you know that Mercedes does not actually produce its heavy-duty G-Class? To be fair, it does keep design, development and marketing of the SUV in-house, but the vehicle is entirely built in the factory of Magna Steyr, a contract manufacturer based in Graz, Austria. In the same plant one will also find entire production lines for the Jaguar I-Pace and E-Pace, as well as BMW’s Series 5.

Coyle and Nguyen are focused on the question of how to measure the "output" of an economy if design, development, and marketing happen in one place, but the physical production intimately linked to them happens in another. For a previous discussion of factoryless manufacturing in the US economy, see "Factoryless Goods Producing Firms" (May 16, 2015).

In thinking about these shifting lines between what is produced inside and outside a company, I was intrigued by Edward Tenner's discussion of "a long-developing tension between two iconic corporate models: the flexible organization and the deep organization" in "The 737 MAX And the Perils of the Flexible Corporation" (Milken Institute Review, Fourth Quarter 2019, pp. 36-49). Tenner describes the difference in this way:

Depth is not just a matter of corporate size or scale. It is an attitude of public responsibility. Executives of a deep organization may strive for the highest possible profits — but only in the context of a perceived essential role in the social order. The gospel of flexibility, rooted in business-school doctrines of the primacy of shareholder value … seeks to preserve freedom of short-term optimization and cost reduction. By contrast, the gospel of depth has been mainly a tacit one, based on the idea that a dominant organization has a distinct role in the social order. It seeks to serve multiple stakeholders, to provide safety and security to consumers even if it raises costs and to plan for its long-term future. Deep organizations have often subscribed to what has been called welfare capitalism, providing impressive health, educational and recreation services for employees, expecting exceptional loyalty and higher productivity in return. Many, though not all, deep organizations have government ties and semi-official roles. John D. Rockefeller’s Standard Oil was not a deep organization in this sense; AT&T before the breakup of Ma Bell was.

A deep organization has extraordinary in-house capabilities, managed administratively inside the firm. A flexible organization is more likely to rely on a shifting mixture of locations and outside contractors wherever that seems to increase efficiency and raise profits.

Tenner uses Boeing as an example of a shift from a "deep" to a "flexible" organization. As an illustration from the days of a "deep" Boeing, he tells the story of the launch of the 747 in the late 1960s:

In 1968, construction of the first 747 from scratch, a plane that was radically different than any existing aircraft, was completed in only 29 months. Boeing had to build an entirely new factory (the world’s largest) to produce it. But the company already had the asset on hand that mattered most, a staff of some 50,000 experienced engineers, technicians and managers known within the company as “the Incredibles.” In contrast to the trial and error of many engineering projects, the 747 was completed with such remarkable precision that the head of the project, Joe Sutter, could predict exactly where on the runway the plane would take off — and test pilots lauded its handling from Day 1. 

A deep organization, like Boeing in the 1960s and 1970s, has not only a reservoir of professional skills but also an esprit de corps that can resist policies that threaten the mission. At one point in the project, Boeing senior management wanted to cut 1,000 of the 4,500 engineers assigned to the 747. At the risk of his own job, Sutter walked out of the meeting at which the proposal was made, and he prevailed.

But over time, Boeing shifted toward flexibility. It moved its headquarters from Seattle to Chicago. It opened a major production facility in South Carolina. There were strong arguments for these and other changes, but there were also tradeoffs in terms of in-house expertise and cohesiveness. And then came the 737 MAX, two of which crashed earlier this year. There are typically multiple reasons for plane crashes, and this is no exception. But some of the contributing factors proposed in media reports include problems with manufacturing in the South Carolina facility "including manufacturing debris left in finished aircraft," the fact that "some key software of the Boeing 737 MAX had been developed by $9-an-hour programmers outsourced abroad," and "the FAA’s delegation of some essential monitoring tasks to Boeing employees."

As Tenner argues, we are in an age of the gospel of flexibility, and the advantages of flexibility in many contexts are very real. However, Tenner pushes back, gently, pointing out that deep organizations have their benefits, too.

For example, deep organizations often had in-house corporate research laboratories, which proved remarkably fertile places for the interaction of cutting-edge science and real-world production issues. And sometimes those corporate research labs produced extraordinarily important spinoff innovations like the transistor, the laser, or even the idea of the relational database.

Deep organizations sometimes provided quality so high that it was redundant. In the case of AT&T during its days as a deep organization, before it was broken up in 1984, Tenner notes:

And, in the 1950s, handsets designed by the renowned Henry Dreyfuss and manufactured by its own Western Electric subsidiary, were rated to stand decades of punishing use. One result was the Bell System’s astonishing reliability rate of 99.999 percent call completion. In 2011, a senior Google executive acknowledged that the web had yet to achieve reliability even remotely as high.

Products that seem overengineered can look like a place for cost-cutting, but the lessons learned from producing in this way can be valuable; and when it comes to airplane manufacturing, overengineering and redundancy in the name of safety seem important.

There is also a certain kind of deep expertise that can be developed by in-house professionals. Tenner writes:

In 1989, the information scientist Michael J. Prietula and the Nobel economist Herbert A. Simon published an article in the Harvard Business Review, “The Experts in Your Midst,” on the wealth of knowledge and capabilities that underappreciated specialists in an organization possess. Theoretically, a lean, agile company might try to substitute outside consultants and temporary workers for in-house talent. Yet because in-house professionals bring years of tacit knowledge to problems, they paradoxically may be better equipped to find solutions than experts unfamiliar with the organization’s workings. The sociologist Chandra Mukerji has even argued that the government sponsors academic oceanography generously not because its findings are directly applicable to, say, Navy operations, but because its support creates a reserve army of scientific experts for urgent needs.

There's no single "right" answer to whether a company should be flexible or deep. Factoryless may work just fine for some firms. Deep in-house expertise may be important to others. Tenner cites Google as a leading example of a modern "deep" firm. He writes:

From the perspective of 2020, the state of deep organization in American industry is not as discouraging as it appeared a generation earlier. Google, still an obscure academic startup in the late 1990s, is the old AT&T’s successor as a deep organization, hegemonic in the new field of online search as the Bell System had been in telecommunications. In 2017, fully 16 percent of Google employees held PhD degrees, over three times the proportion at Microsoft, Apple and Amazon. If a measure of a deep organization is its efforts to plan the future privately, it is hard to think of any 20th-century corporation’s plans as ambitious (and controversial) as those of a Google subsidiary for creating a network-controlled smart city in Toronto.

Although it's not one of Tenner's themes, I would also add that the choice between flexible and deep shapes a firm's incentives to invest in the human capital of its workers. When a deep firm expects many of its workers to stay for a substantial period of time, it has an incentive to invest in their skills and training, to view them as candidates for future promotions, to encourage them to build up specific knowledge about the company, and in general to envision the company as made up of people who are building longer-term careers there. When a flexible firm buys from outside, it doesn't need to care much about those who work for its suppliers. The notion of firm and employee making investments in a long-term career is diminished. Workers instead need to think semi-continually about where their next job will be, and firms need to think semi-continually about hiring from outside to replace departing workers. The notion that many workers will eventually settle into a career progression of rising skills and pay with a single employer comes to seem outdated. That, too, is a tradeoff of the flexible firm.