The Declining Share of Veterans Among Prime-Age Men: The Centennial of Armistice Day

The armistice marking the end of World War I was signed on November 11, 1918. A year later–and 100 years ago today–the first Armistice Day celebrations were held at Buckingham Palace. The US Congress passed a resolution commemorating Armistice Day in 1926, and it became a national holiday in 1938. In 1954, after World War II, its name was changed to Veterans Day in the United States.

But as you may have noticed when attending an event where veterans are encouraged to stand and be recognized for their service, the share of “prime-age” men (a term economists use for those in their main working years, ages 25-54) who are veterans has been in sharp decline.

Courtney C. Coile and Mark G. Duggan raise this issue in passing in a Spring 2019 essay in the Journal of Economic Perspectives (“When Labor’s Lost: Health, Family Life, Incarceration, and Education in a Time of Declining Economic Opportunity for Low-Skilled Men,” 33:2, pp. 191-210). They write:

[W]e call attention to perhaps the most significant change among prime-age men in recent decades. In 1980, fully 45 percent of prime-age men responding to the Bureau of Labor Statistics’ monthly Current Population Survey said that they had previously served in the military. This number steadily declined during the next 36 years and stood at just 10 percent by 2016 in this same survey.

Of course, a major reason for this shift is the end of the military draft in 1973, a change in which the arguments and projections of economists about how an all-volunteer military force could function played a substantial role (for a background essay, see John T. Warner and Beth J. Asch, “The Record and Prospects of the All-Volunteer Military in the United States,” Journal of Economic Perspectives, Spring 2001, 15:2, pp. 169-192).

What do we know about the effects of this dramatic social change? Coile and Duggan write:

Much of the economics literature has examined the effect of military service by using plausibly exogenous variation in the likelihood of service driven by one’s draft lottery number (Angrist 1990). This research has tended to find quite modest long-term effects of military service on employment, earnings, and health status (for example, Angrist, Chen, and Frandsen 2010; Angrist, Chen, and Song 2011). However, these studies are unable to capture the peer effects or general equilibrium effects of military service. Recent research has suggested substantial gains to cognitive and noncognitive skills stemming from military service (Spiro, Settersten, and Aldwin 2015) and associated benefits such as the GI bill. Overall, we see a strong need for further work to investigate how changing economic opportunities, declines in military service, and other factors are contributing to or cushioning the problems of low-skilled prime-age men.

This shift away from shared military experience is a large and probably understudied social shift. Many of those who served in the armed forces, and survived, have lasting personal ties both to those they knew and to others who shared the experience.

My suspicion is that the effects of military service in later life are probably quite different between the days of the draft and the all-volunteer force. For example, during the draft era, pay could be relatively low and there was not much reason for the armed forces to invest in the human capital of new soldiers, most of whom would be out of military service in a few years. With the all-volunteer force, pay had to be somewhat higher and the armed forces had to focus on training and on incentives for retention. When big US companies need a new CEO, they can do a job search outside their own firm. But when the armed forces need a new general or admiral, they have to promote from within.

There are occasional proposals for a national service requirement, proposals which in their rhetoric sometimes piggyback on the strong positive feelings many of us have about veterans. But I’m old enough to have grown up in the aftermath of the Vietnam-era military draft, and it would be a dramatic understatement to say that the draft was unpopular. My own cynical observation about national service proposals is that they are a case of middle-aged and elderly people voting on which young adults are eligible for exceptions and loopholes, and how the others will be required to spend a couple of years of their lives in low-cost labor. In an odd way, I’d be marginally more sympathetic to a national service proposal requiring, say, that everyone between the ages of 30 and 50 take two years out of their life for full-time, low-wage labor. After all, wouldn’t these ages be potentially an even more productive time to “foster unity” and “build bridges” and “bring people together,” and all the other claims made for a national service requirement? Or perhaps members of Congress could require that they personally each spend one month out of every two years in a full-time, away-from-home national service requirement.

US Dependence on Imported Minerals

This figure shows US reliance on imports for various minerals, from the US Geological Survey. I’m fully aware that minerals are not equally distributed around the world, and I’m a pro-trade guy, so I won’t lose sleep tonight over these numbers. But during waking hours, I will wonder whether the supplies from other countries are reasonably steady and reliable. I’ll also wonder whether global pollution is worse because US firms are importing minerals from countries with substantially lower environmental standards than the United States.
2018 US Net Import Reliance

Minimum Wages and Overtime Rules

Perhaps the best-known provision of the Fair Labor Standards Act (FLSA) of 1938 is that it set a federal minimum wage for the first time. In addition, this is the law that established the overtime rule: if you are a “nonexempt” worker–which basically means a worker paid by the hour rather than on a salary–and you work more than 40 hours/week, you must be paid time-and-a-half for the additional hours.

Charles C. Brown and Daniel S. Hamermesh take a look at the evidence on both provisions in “Wages and Hours Laws: What Do We Know? What Can Be Done?” (Russell Sage Foundation Journal of the Social Sciences, December 2019, 5:5, pp. 68-87). They write:

Although wages and hours are regulated under the same law, policy developments and research on the law’s impacts could not be more different between the two areas. The federal minimum wage has been raised numerous times; and many subfederal jurisdictions impose their own wage minima that, where they exceed the federal minimum, supersede it. Perhaps because of this variation, a huge literature examining the effects of minimum wages on the U.S. labor market has arisen and has continued to burgeon. A fair conclusion is that American labor economists have spilled more ink per federal budgetary dollar on this topic than on any other labor-related policy. The opposite is the case for regulating hours. The essential parameters of hours regulation have not changed since passage of the act; and perhaps because of this, the dearth of research on the economic impact of hours regulation in the United States, especially recently, is remarkable.

(In the shade of these parentheses, I’ll also mention that this issue of the RSF journal, edited by Erica L. Groshen and Harry J. Holzer, is especially rich in content, including 10 articles on the general theme of “Improving Employment and Earnings in Twenty-First Century Labor Markets.” I’ll list the Table of Contents for the issue, with links to the articles, at the bottom of this post.)

Minimum Wages

The US minimum wage situation has changed dramatically in the last decade or so in a particular way: a much larger share of workers live in states with a minimum wage above the federal level. Brown and Hamermesh write:

Over the past thirty years, however, states’ decisions to increase their minimum wages have become increasingly important given that the federal minimum has changed less frequently. For example, in 2010 (after the 2007 federal increases had become fully effective) only one-third of the workforce was in states with state minima that exceeded the federal $7.25. By 2016, with the federal minimum still at $7.25, that fraction had risen to nearly two-thirds. As of 2018, twenty-nine states … had minimum wages above $7.25. States that have raised their minimum wages above the federal minimum have tended to be high-wage states, and the result has been a minimum wage much more closely (though still imperfectly) aligned with local wages.

Brown and Hamermesh focus on the studies that try to estimate the effects of a minimum wage by looking at these differences in minimum wages that have arisen across states (leaving the issues involved in studying city-level minimum wages for another day). Here are some of the points they make:

There are basically three ways to take advantage of the state-level changes and variations in the minimum wage: comparisons between states; comparisons between border counties of neighboring states; and comparisons between states and “synthetic” control groups, which basically means finding a combination of other areas whose economic patterns were similar to a certain state’s before the minimum wage was changed.
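As a concrete sketch of the third approach, here is a minimal, hypothetical illustration (the employment data and three-donor setup are invented for this example; real applications use constrained optimization over many donor states): the “synthetic” control is the weighted average of donor states whose pre-policy employment path best matches the treated state.

```python
import random

def synthetic_control_weights(y_treated, donors, step=0.01):
    # Grid-search simplex weights over three donor states, minimizing the
    # squared pre-period gap between the treated state and its synthetic
    # control. (Real applications solve a constrained optimization problem
    # over many donors; this brute-force version is only illustrative.)
    best_w, best_err = None, float("inf")
    steps = int(round(1 / step))
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            w = (i * step, j * step, (steps - i - j) * step)
            err = sum(
                (yt - sum(wk * d[k] for k, wk in enumerate(w))) ** 2
                for yt, d in zip(y_treated, donors)
            )
            if err < best_err:
                best_w, best_err = w, err
    return best_w

# Hypothetical pre-period teen employment rates: 8 quarters x 3 donor states
random.seed(0)
donors = [[random.uniform(30, 40) for _ in range(3)] for _ in range(8)]
# Construct a treated state that is exactly 0.6*donor1 + 0.4*donor2
y_treated = [0.6 * d[0] + 0.4 * d[1] for d in donors]

w = synthetic_control_weights(y_treated, donors)
print([round(x, 2) for x in w])  # → [0.6, 0.4, 0.0]
```

After the policy change, the treated state's actual employment path is compared with the path implied by these fixed weights; the gap is the estimated policy effect.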

When doing these comparisons, a researcher will want to adjust for other factors that might affect state economies: for example, a natural disaster that hit one state but not another, or a change in the price of oil that would affect an oil-producing state. A researcher can allow for each state or border county to follow its own time trend, or for the effect of the minimum wage on employment to be different in every state. Is the relationship between a changing minimum wage and employment a straight line or a curved line–and if it’s a curved line, how curved is it? The more variables like this you include, the smaller the estimated effect of a minimum wage on employment is likely to be. There is considerable disagreement and controversy over which variables should be included.

It’s been typical in many of these studies to focus on either teenagers or restaurant workers, because both are groups whose pay is presumably affected by the minimum wage.

A common finding is that a rise in the minimum wage of 10% raises the wages of teenagers as a group, or restaurant workers as a group, by about 2%–presumably because some teenagers or restaurant workers were already earning more than the minimum wage and thus weren’t affected.
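The arithmetic behind that kind of pass-through can be seen in a toy calculation (the wage distribution below is invented purely for illustration): when only a fraction of the group earns at or near the old minimum, a 10% minimum wage increase raises the group's average wage by far less than 10%.

```python
# Toy illustration with an invented wage distribution: a 10% minimum wage
# increase raises the group's average wage by much less than 10%, because
# many workers already earn above the new minimum and are unaffected.
old_min = 7.25
new_min = old_min * 1.10                      # a 10% increase, to $7.975
wages = [7.25] * 30 + [10.00] * 70            # 30% of the group at the minimum

new_wages = [max(w, new_min) for w in wages]  # bound workers bumped to new minimum
change = sum(new_wages) / sum(wages) - 1
print(f"group average wage rises {change:.1%}")  # → about 2.4% in this example
```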

Estimates of the effect of raising the minimum wage on the employment of teenagers or restaurant workers are all over the place, depending on exactly how the estimation is done, but they are usually “small”–which in this case means small enough that the earnings gains caused by a minimum wage increase are only partially offset by employment losses.

Of course, showing that past minimum wage increases had small effects in reducing employment doesn’t prove that additional minimum wage increases would also have small effects. The usual belief of economists is that the effects of a rising minimum wage on employment would be small up to some point, but then start getting larger. That point is likely to vary across states–which is why it makes some sense to have different minimum wages across states.

At least one recent study has tried to focus on workers ages 16-25 who have not completed high school, rather than teenagers in general. There is some evidence that a higher minimum wage might have a bigger effect on these low-education workers in particular than on teenagers or restaurant workers overall.

It’s plausible that the effects of a higher minimum wage on employment might be larger in the long term. For example, perhaps a firm doesn’t fire anyone when the minimum wage rises, but instead just slows down its hiring. Or perhaps a minimum wage causes certain kinds of firms to be more likely to exit the market over time, or less likely to enter, or more likely to invest in labor-saving technology. Some studies have found support for these effects; others have not.

For some complementary discussion of the evidence on raising minimum wages, see previous posts on this blog.

Overtime Rules

In contrast to minimum wage laws, overtime rules haven’t changed much over time. Brown and Hamermesh write: “In the eighty years since the FLSA was enacted, the specification of its crucial parameters regulating hours—a penalty rate of 50 percent extra wages on hours beyond the standard weekly hours (HS) of forty—has not changed.” Perhaps the main way the rules have come up in recent policy disputes is in proposed laws that would allow employers to give “comp time” for overtime work, meaning extra vacation time, instead of paying higher wages.

But a big change in the overtime rules has been happening in a subtle way. Under the current rule, a salaried worker must be paid at least $455/week to be exempt from the requirement of time-and-a-half pay for overtime. That nominal threshold has gone essentially unchanged for many years, even as its value has been eaten away by inflation. Brown and Hamermesh calculate that $455/week was about double the median weekly earnings in the US economy back in the mid-1970s; now, it’s about 50% of median weekly earnings.
To put this another way, it used to be that you had to be earning a salary of double the typical weekly earnings before you were exempt from overtime rules. Now, you can be paid a much lower salary, half of typical weekly earnings, and you are still exempt from the overtime rules. The rules requiring overtime pay thus have gradually come to apply to many fewer workers over time. The Obama administration tried to raise the limit to $913/week by using an administrative rule, but the courts held (reasonably enough, in my view) that this kind of decision needed to be made by Congress passing a law. Apparently the Trump administration has now proposed raising the limit to $679/week.
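The erosion can be illustrated with a back-of-the-envelope calculation (the median weekly earnings figures below are rough stand-ins chosen to match the ratios Brown and Hamermesh report, not official statistics):

```python
# A fixed nominal overtime-exemption threshold erodes relative to median
# weekly earnings as nominal wages grow. The median figures below are
# illustrative stand-ins, not official statistics.
threshold = 455.0          # exempt-salary threshold, dollars per week

median_mid1970s = 225.0    # stand-in for mid-1970s median weekly earnings
median_recent = 910.0      # stand-in for recent median weekly earnings

ratio_then = threshold / median_mid1970s
ratio_now = threshold / median_recent
print(f"then: threshold was {ratio_then:.0%} of median weekly earnings")
print(f"now:  threshold is {ratio_now:.0%} of median weekly earnings")
```

With a frozen nominal threshold, the same calculation mechanically shrinks the share of salaried workers covered by overtime protection as nominal earnings rise.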
What would happen if the rules were changed so that dramatically more workers needed to be paid overtime for working more than 40 hours/week? Presumably, some of these workers would get paid overtime, but in addition, employers would try to reduce the number of workers who ended up above that weekly limit. Brown and Hamermesh run through various calculations and look at some international evidence. They write: “We can conclude that increasing the exempt limit would have raised some salaried workers’ earnings and reduced their weekly hours. One exercise suggested that 12.5 million workers would have been affected …”
The effects of changing the rules so that more workers are eligible for overtime pay aren’t enormous. Still, for workers who are being paid salaries below the median weekly wage, and who thus aren’t eligible for overtime, it could be a meaningful gain. They write:

If we are interested in spreading work among more people and removing the United States from its current position as the international champion among wealthy countries in annual work time per worker, minor tinkering with current overtime laws will do little. We might borrow from some of the panoply of European mandates that alter the amount and timing of work hours. Among these are penalties for work on weekends, evenings, and nights and limits on annual overtime hours, while lengthening the accounting period for overtime beyond the current single week. If our goal is to spread work and make for a more relaxed society, these changes will help but their effects will also be small.

____________

The Roundup Case: Problems with Implementing Science-Based Policy

Imagine, just for the sake of argument, that you are open-minded about the question of whether the weed-killer Roundup (long produced by Monsanto, which was recently acquired by Bayer AG) causes cancer. You want to make a decision based on scientific evidence. However, you aren’t a scientist yourself, and you don’t feel competent to read scientific studies.

Geoffrey Kabat asks “Who’s Afraid of Roundup?” in the Fall 2019 issue of Issues in Science and Technology. More broadly, he uses the controversy over Roundup as a way to ask about the role of science in decision-making.

When it comes to Roundup and its active ingredient glyphosate, the Environmental Protection Agency has consistently said that “there are no risks to public health when glyphosate is used in accordance with its current label and that glyphosate is not a carcinogen.” As Kabat points out:

The US Environmental Protection Agency’s recent assessment is only the latest in a succession of reports from national regulatory agencies, as well as international bodies, that support the safety of glyphosate. These include Health Canada, the European Food Safety Authority (EFSA), the European Chemicals Agency, Germany’s Federal Institute for Risk Assessment, and the Food and Agriculture Organization of the United Nations, as well as health and regulatory agencies of France, Australia, New Zealand, Japan, and Brazil.

But just when you find yourself deeply relieved that the experts have reached a consensus, you find that one agency disagrees. In 2015, the International Agency for Research on Cancer (IARC) listed glyphosate as a “probable carcinogen.” There are lots of reasons to be dubious about the IARC decision, and to believe the consensus of all the other agencies around the world, and Kabat runs through quite a list. Here are a few of his points:

  • Unlike the other health-and-safety agencies, the IARC ignores the size of the dose. Thus, for example, when IARC evaluated 500 agents and chemicals while ignoring the size of the dose, it found that 499 of them were possible carcinogens. Other agencies take the dose into account.
  • The IARC evaluation looked only at certain parts of some studies of how glyphosate affected rodents. Reanalysis of the same studies found that the “IARC Working Group that conducted the assessment selected a few positive results in one sex and used an inappropriate statistical test to declare some tumor increases significant.”
  • There is a major study funded by the National Cancer Institute looking at 54,000 pesticide applicators in Iowa and North Carolina. “Indeed, when the results for glyphosate and cancer incidence … were finally published in the Journal of the National Cancer Institute, in 2018, the paper reported no significant increases …”
  • A key scientist in the IARC process both led the way in designating glyphosate as a substance to be studied and in writing the IARC report. Then, two weeks after the report came out, this scientist “signed a lucrative contract to act as a litigation consultant with a law firm—Lundy, Lundy, Soileau, and South—engaged in bringing lawsuits against Monsanto for Roundup exposure.”

In a bigger-picture sense, the actual science over Roundup and glyphosate becomes almost irrelevant to the public disputes. The scientific question of whether glyphosate is a carcinogen is treated as identical to the question of whether one is anti-pesticide, anti-genetic-modification, and anti-Big Agriculture.

The result is what the head of the European Food Safety Authority called “the Facebook age of science.” As background, the European agencies are well-known for their willingness to invoke the “precautionary principle”–basically, if we aren’t sure and it might cause a problem, we should prohibit it. In this spirit, a group of almost 100 scientists wrote to EFSA to complain about its decision allowing glyphosate. Here’s how Bernhard Url, the head of EFSA, responded:

You have a scientific assessment, you put it on Facebook, and you count how many people ‘like’ it. For [EFSA], this is no way forward. We produce a scientific opinion, we stand for it, but we cannot take into account whether it will be liked or not. … People that have not contributed to the work, that have not seen the evidence most likely, that have not had the time to go into the detail, that are not in the process, have signed a letter of support [for a ban on glyphosate]. Sorry to say that, for me, with this you leave the domain of science, you enter into the domain of lobbying and campaigning. And this is not the way EFSA goes.

Roundup is of course just one product, but the issue of how science will be used in public policy is much broader. For example, if a lawsuit alleges that Roundup causes cancer, the truth of that accusation presumably matters. As Kabat points out, it “should come as no surprise that the same factors that are at work here are at work in many other areas, whether electromagnetic fields, cell phone ‘radiation,’ so-called endocrine disrupting chemicals, numerous aspects of diet, cosmetic talc, GMOs, vaccines, nuclear power, or climate change.”

In my own contentious way, I find it especially interesting when people make strong appeals to a scientific consensus in one area, but then dismiss it in other areas. For example, those who believe that action should be taken to reduce greenhouse gas emissions sometimes accuse their opponents of denying “the science.” But on occasion, those who wrap themselves in the mantle of “the science” when it comes to climate change turn out to oppose vaccinations or Roundup. The proposal to build the Keystone XL oil pipeline from Canada into the United States went through multiple environmental reviews during the Obama administration, each one finding that it would not have a significant negative effect. For those protesting the pipeline, as for those writing group letters to the European regulators about glyphosate, the “science” was only acceptable if it supported their prior beliefs.

One of my favorite examples of “the science” versus popular beliefs involves the irradiation of food. For a quick overview, Tara McHugh describes “Realizing the Benefits of Food Irradiation” in the September 2019 issue of Food Technology Magazine. As she notes, the Food and Drug Administration recently approved irradiation for fresh fruits and vegetables, and it had already been approved for a range of other food products. McHugh writes:

The global food irradiation market was valued at $200 million in 2017 and was projected by Coherent Market Insights to grow at a 4.9% combined annual growth rate from 2018 to 2026. This projects the market size to rise to $284 million by 2026. This high growth rate was envisioned due to increased consumer acceptance since the U.S. Food and Drug Administration (FDA) approved phytosanitary treatment of fresh fruits and vegetables by irradiation. The food irradiation market in Asia is also growing very rapidly owing to approval of government agencies in India and other countries. Presently over 40 countries have approved applications to irradiate over 40 different foods. More than half a million tons of food is irradiated around the globe each year. About a third of the spices and seasonings used in the United States are irradiated.

It would be interesting to see a Venn diagram showing how many of those who believe in “the science” when it comes to climate change also believe in “the science” when it comes to the safety of Roundup, vaccinations, or irradiating food. Or perhaps there is a human cognitive bias which is more prone to believe “the science” when it warns of danger, but less likely to believe it when it tells us that something we believe to be dangerous (or something that we oppose on other grounds) is actually safe.

Flexible vs. Deep: What are the Ties that Bind a Firm Together?

One of the classic questions in economics is about what determines what is inside or outside a company: that is, why do companies buy some inputs from outside the firm through market transactions, but hire workers to produce other inputs inside the firm? Economists will recognize this as the central question posed by Ronald Coase (Nobel ’91) in his famous 1937 essay, “The Nature of the Firm” (Economica, November 1937, pp. 386-405). Coase points out that economic activity within firms is coordinated by conscious administrative action, while economic activity between firms is coordinated by supply and demand. In one passage that always makes me smile, Coase writes (footnotes omitted):

As D. H. Robertson points out, we find \”islands of conscious power in this ocean of unconscious co-operation like lumps of butter coagulating in a pail of buttermilk.” But in view of the fact that it is usually argued that co-ordination will be done by the price mechanism, why is such organisation necessary? Why are there these “islands of conscious power”? Outside the firm, price movements direct production, which is co-ordinated through a series of exchange transactions on the market. Within a firm, these market transactions are eliminated and in place of the complicated market structure with exchange transactions is substituted the entrepreneur-co-ordinator, who directs production. It is clear that these are alternative methods of co-ordinating production.

The line between which activities are coordinated more effectively by administrative action inside a firm and which are coordinated more effectively by market transactions between firms shifts over time, and across different types of firms. One current example is the number of firms that sell manufactured goods but are “factoryless”–that is, they don’t own or manage the factory in which their goods are produced. Diane Coyle and David Nguyen offer some recent examples in “No plant, no problem? Factoryless manufacturing and economic measurement” (ESCoE Discussion Paper 2019-15, September 2019). In a short overview of that paper, they write:

Did you know that Mercedes does not actually produce its heavy-duty G-Class? To be fair, it does keep design, development and marketing of the SUV in-house, but the vehicle is entirely built in the factory of Magna Steyr, a contract manufacturer based in Graz, Austria. In the same plant one will also find entire production lines for the Jaguar I-Pace and E-Pace, as well as BMW’s Series 5.

Coyle and Nguyen are focused on the question of how to measure the “output” of an economy if design, development, and marketing happen in one place, but the physical production intimately linked to them happens in another. For a previous discussion of factoryless manufacturing in the US economy, see “Factoryless Goods Producing Firms” (May 16, 2015).

In thinking about these shifting lines between what is produced inside and outside a company, I was intrigued by Edward Tenner’s discussion of “a long-developing tension between two iconic corporate models: the flexible organization and the deep organization” in “The 737 MAX and the Perils of the Flexible Corporation” (Milken Institute Review, Fourth Quarter 2019, pp. 36-49). Tenner describes the difference in this way:

Depth is not just a matter of corporate size or scale. It is an attitude of public responsibility. Executives of a deep organization may strive for the highest possible profits — but only in the context of a perceived essential role in the social order. The gospel of flexibility, rooted in business-school doctrines of the primacy of shareholder value … seeks to preserve freedom of short-term optimization and cost reduction. By contrast, the gospel of depth has been mainly a tacit one, based on the idea that a dominant organization has a distinct role in the social order. It seeks to serve multiple stakeholders, to provide safety and security to consumers even if it raises costs and to plan for its long-term future. Deep organizations have often subscribed to what has been called welfare capitalism, providing impressive health, educational and recreation services for employees, expecting exceptional loyalty and higher productivity in return. Many, though not all, deep organizations have government ties and semi-official roles. John D. Rockefeller’s Standard Oil was not a deep organization in this sense; AT&T before the breakup of Ma Bell was.

A deep organization has extraordinary in-house capabilities, managed administratively inside the firm. A flexible organization is more likely to rely on a mixture of shifted locations and outside contractors wherever this seems to increase efficiency and raise profits.

Tenner uses Boeing as an example of a shift from a “deep” to a “flexible” organization. As an illustration from the days of a “deep” Boeing, he tells the story of the launch of the 747 back in 1969:

In 1968, construction of the first 747 from scratch, a plane that was radically different than any existing aircraft, was completed in only 29 months. Boeing had to build an entirely new factory (the world’s largest) to produce it. But the company already had the asset on hand that mattered most, a staff of some 50,000 experienced engineers, technicians and managers known within the company as “the Incredibles.” In contrast to the trial and error of many engineering projects, the 747 was completed with such remarkable precision that the head of the project, Joe Sutter, could predict exactly where on the runway the plane would take off — and test pilots lauded its handling from Day 1. 

A deep organization, like Boeing in the 1960s and 1970s, has not only a reservoir of professional skills, but also an esprit de corps that can resist policies that threaten the mission. At one point in the project, Boeing senior management wanted to cut 1,000 of the 4,500 engineers assigned to the 747. At the risk of his own job, Sutter walked out of the meeting at which the proposal was made–and he prevailed.

But over time, Boeing shifted toward flexibility. It moved its headquarters from Seattle to Chicago. It opened a major production facility in South Carolina. There were strong arguments for these and other changes, but there were also tradeoffs in terms of in-house expertise and cohesiveness. And then came the 737 MAX, two of which crashed earlier this year. There are typically multiple reasons for plane crashes, and this is no exception. But some of the reasons that have been proposed in media reports include problems with manufacturing in the South Carolina facility, “including manufacturing debris left in finished aircraft”; reports that “some key software of the Boeing 737 MAX had been developed by $9-an-hour programmers outsourced abroad”; and “the FAA’s delegation of some essential monitoring tasks to Boeing employees.”

As Tenner argues, we are in an age of the flexibility gospel, and the advantages of flexibility in many contexts are very real. However, Tenner pushes back, gently, pointing out that deep organizations have their benefits, too.

For example, deep organizations often had in-house corporate research laboratories, which proved remarkably fertile places for the interaction of cutting-edge science and real-world production issues. And sometimes those corporate research labs produced extraordinarily important spinoff innovations like the transistor or the laser, or even the idea of the relational database.

Deep organizations sometimes provided quality so high that it was redundant. In the case of AT&T during its days as a deep organization, before it was broken up in 1984, Tenner notes:

And, in the 1950s, handsets designed by the renowned Henry Dreyfuss and manufactured by its own Western Electric subsidiary, were rated to stand decades of punishing use. One result was the Bell System’s astonishing reliability rate of 99.999 percent call completion. In 2011, a senior Google executive acknowledged that the web had yet to achieve reliability even remotely as high.
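For a quick sense of scale on that “five nines” figure, a short calculation:

```python
# "Five nines" reliability: a 99.999% call-completion rate implies about
# one failed call per 100,000 attempts, or ten per million.
completion_rate = 0.99999
failures_per_million = round((1 - completion_rate) * 1_000_000)
print(failures_per_million)  # → 10
```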

Products that seem overengineered can look like a target for cost-cutting, but the lessons learned from producing at that standard can be valuable–and when it comes to airplane manufacturing, overengineering and redundancy in the name of safety seem important.
There is also a certain kind of deep expertise that can be developed by in-house professonals. Teller writes:

In 1989, the information scientist Michael J. Prietula and the Nobel economist Herbert A. Simon published an article in the Harvard Business Review, “The Experts in Your Midst,” on the wealth of knowledge and capabilities that underappreciated specialists in an organization possess. Theoretically, a lean, agile company might try to substitute outside consultants and temporary workers for in-house talent. Yet because in-house professionals bring years of tacit knowledge to problems, they paradoxically may be better equipped to find solutions than experts unfamiliar with the organization’s workings. The sociologist Chandra Mukerji has even argued that the government sponsors academic oceanography generously not because its findings are directly applicable to, say, Navy operations, but because its support creates a reserve army of scientific experts for urgent needs.

There’s no single “right” answer to whether a company should be flexible or deep. Factoryless may work just fine for some firms. Deep in-house expertise may be important to others. Teller cites Google as a leading example of a modern “deep” firm. He writes:

From the perspective of 2020, the state of deep organization in American industry is not as discouraging as it appeared a generation earlier. Google, still an obscure academic startup in the late 1990s, is the old AT&T’s successor as a deep organization, hegemonic in the new field of online search as the Bell System had been in telecommunications. In 2017, fully 16 percent of Google employees held PhD degrees, over three times the proportion at Microsoft, Apple and Amazon. If a measure of a deep organization is its efforts to plan the future privately, it is hard to think of any 20th-century corporation’s plans as ambitious (and controversial) as those of a Google subsidiary for creating a network-controlled smart city in Toronto.

Although it’s not one of Teller’s themes, I would also add that the choice between flexible and deep shapes a firm’s incentives to invest in the human capital of its workers. When a deep firm expects many of its workers to stay for a substantial period of time, it has an incentive to invest in their skills and training, to view them as candidates for future promotions, to encourage them to build up specific knowledge about the company, and in general to envision the company as made up of people who are building longer-term careers there. When a flexible firm buys from outside, it doesn’t need to care much about those who work for its suppliers. The notion of firm and employee making investments in a long-term career is diminished. Workers instead need to think semi-continually about where their next job will be, and firms need to think semi-continually about hiring from outside to replace departing workers. The notion that many workers will eventually settle into a career progression of rising skills and pay with a single employer comes to seem outdated. That, too, is a tradeoff of the flexible firm.

Some Thoughts about Populism

“Populism” is remarkably slippery to define, but many people claim to know it when they see it–and to worry about its resurgence. Here, I’ll offer some thoughts about the current populist moment. I’ve spent some time thinking about this lately because the Journal of Economic Perspectives, where I work as Managing Editor, published a four-paper “Symposium on Modern Populism” in the Fall 2019 issue. The papers are:

“On Latin American Populism, and Its Echoes around the World,” by Sebastian Edwards
“Informational Autocrats,” by Sergei Guriev and Daniel Treisman
“The Surge of Economic Nationalism in Western Europe,” by Italo Colantone and Piero Stanig
“Economic Insecurity and the Causes of Populism, Reconsidered,” by Yotam Margalit

Also, the Centre for Economic Policy Research in London has started a Research and Policy Network on Populism, and recently published four short and readable essays on the topic at its VoxEU website.

Defining Populism? 

Definitions of “populism” are sometimes very broad. For example, the Merriam-Webster definition is that it refers to a party “claiming to represent ordinary people.” This has an element of truth, but it’s difficult to think of a successful political party which would not make such a claim! Other definitions focus on a party that wants to redistribute to the poor. Again, this has an element of truth, but it seems overly broad.

The politicians who are commonly referred to as populists do claim to represent the ordinary people and the poor, but they also have an us-vs.-them edge. It’s not just wanting to help the poor, but also a broader narrative that ordinary people are being ill-treated by identifiable villains. Sometimes the designated villains are economic, like big corporations and the rich. Sometimes the designated villains have a foreign tinge, like those who allow imports of foreign products or a surge of immigrants. Populism typically involves both an economic claim that ordinary people are being left behind or mistreated, and also a broader political/cultural dimension that elites don’t understand ordinary people and are taking advantage of them.

Part of what makes “populism” hard to define is that it is often used as a criticism. The implication is that populists aren’t just people who would advocate higher taxes on the rich or on corporations, but people who would demonize those groups or practice confiscatory policies toward them. Populists wouldn’t just argue for lower imports or enhanced border controls, but would describe these changes in terms of gross unfairness, plotted by the few against the many.

In short, the concern is that populists aren’t technical analysts, arguing over the costs and benefits of shifting some policy parameters to help ordinary people or the poor. Instead, populists are whipping up surges of emotion to gain political power, while making political and social divisions worse and making policy promises that either can’t be kept or won’t work. Once populist emotions are fully roused, the polity may become disdainful of seemingly boring and irrelevant ideas like constraints on executive power, or an independent judiciary, media, and central bank. As the populist policy prescriptions inevitably fail, the failure may just feed the fire of populism further–by supposedly showing that the enemies of the ordinary people are even stronger than suspected, and must be countered with even more strong-handed interventions by a charismatic and authoritarian leader.

Economic Manifestations of Populism

For economists, perhaps the classic view of “populism” is based on a common pattern in Latin America, described in work by Rudiger Dornbusch and Sebastian Edwards back in the 1990s. They argued that populist regimes used a variety of policies–protectionism, agrarian reforms, controls and regulations, and the nationalization of large companies–but that perhaps the defining theme was a strong rise in government spending.

At first, this rise in spending often stimulated the economy, and in some cases a populist leader also had good economic luck–like an oil-exporting economy experiencing a rise in the price of oil. At this stage, there was often a lot of preening about how the populist prescription worked. There were often price controls to assure that everything remained affordable, and inflows of imported products as well. But as government budget deficits climbed, the inflow of imports increased trade deficits, and the price controls and regulations and nationalizations started to choke off economic flexibility, problems arose: shortages of goods and black markets, wages not keeping up with soaring inflation, macroeconomic problems in repaying debt. Of course, a populist leader could use all these problems to claim that even more extreme policies were needed, but economics is not a subject that greatly respects one’s wishes. Eventually there was a crash and a clean-up.

Venezuela, or perhaps Greece, offers some recent examples of this kind of populism. But when talking about the economic roots of modern populism, most commentators have in mind something less extreme. They are talking about communities that suffered economically during the Great Recession, or that suffered from the China import shock of the early 2000s, or that feel that their jobs and local cultural patterns are threatened by a surge of immigrants. As a result, the argument goes, they are motivated to vote for politicians and causes that certainly aren’t the same as classic Latin American populists like Juan Peron in Argentina or Hugo Chavez in Venezuela, but that seem to have a populist tinge: Brexit in the UK, the Alternative für Deutschland in Germany, the Sweden Democrats in Sweden, or Donald Trump in the United States. The JEP essay by Italo Colantone and Piero Stanig goes through the connections from economic disruptions to support for populist parties in the context of European countries.

Cultural Manifestations of Populism

The economic roots of populism clearly have some explanatory power, but one can raise legitimate questions about whether they are the core driving force of modern populism.

For example, Yotam Margalit in his JEP essay points out that a number of studies focus on, say, how many votes for Brexit or President Trump can be traced to communities that were most severely hit by the China import shock or the Great Recession. One can often make a case that these economic events shifted a few percentage points of the vote, and so in a close election, they may have tipped the balance. But as Margalit notes, saying that a negative economic event changed voting patterns by a few percentage points doesn’t explain the entire rest of the vote. When explaining support for populism, it seems important to consider not just the economic factors that affected 2-3% of the vote and tipped the balance in an election, but also the other 48-49% of the vote which did not depend on those economic factors. From this viewpoint, economic factors matter, but they are far from the entire story.

Concern over immigration is clearly a major issue uniting many modern populist parties. But it’s not obvious that the economic consequences of immigration are the real issue here. It turns out that anti-immigration sentiment is often stronger in areas that have experienced negative economic shocks–whether or not those areas have actually experienced more immigration. It also turns out that in public opinion surveys about immigration, anti-immigration sentiment is often much stronger if the questions specifically ask about immigrants who don’t speak the language of the new country or who come from countries with different cultural or religious contexts. Margalit describes an alternative to a view that emphasizes economic causes as the roots of populism:

On this view, long-term structural social developments—increased access to higher education, growing ethnic diversity, urbanization, more equal gender roles—have led to greater acceptance of diverse lifestyles, religions, and cultures. These changes, and the perceived displacement of traditional social values, have caused a sense of resentment among segments of the population in the West, particularly among white men, older people, conservatives, and those with less formal qualifications. Increased exposure to foreign influences that comes with globalization, and even more so the effects of waves of immigration, has exacerbated the sense of a cultural and demographic threat. As a result, formerly predominant majorities have felt their social standing erode and have become increasingly receptive to populist charges against a disconnected, cosmopolitan elite that has turned its back on them. They have also bought into the populist nostalgia for a “golden age” of cultural homogeneity, traditional values, and a strong national identity. Hard economic times undermine the perceived competence of the economic and political elites and thus help fuel the populist distrust in them. Yet by this account, adverse economic change is a contributing factor and possibly a trigger. However, it is not the root cause of widespread populist support.

Is Modern Populism Left or Right?  

In a US context, my sense is that populism has some appeal on all sides of the political spectrum, albeit in different ways.

For example, President Trump sounds populist notes with his “Drain the Swamp,” “Fake News,” and “Build the Wall” rhetoric. He seeks to build an image of himself as working on behalf of the ordinary people, who have not had their interests protected by international trade agreements and a lack of border enforcement. Populist leaders often seek to borrow and spend to stimulate the economy, and believe that the central bank should subordinate itself to this agenda. Trump follows these patterns. Populists often try to expand executive power, and take whatever actions they can by fiat, while lashing out at other institutions like Congress and the media. Trump has a PhD in lashing out.

But it seems clear that many Democratic politicians are offering appeals with a populist tone, too. For example, the rhetoric from Senators Warren and Sanders about protecting the ordinary people from the predations of capitalism, corporate management, Wall Street, and the global economy often has a Trumpian tone. If populism is defined in part by politicians who promise dramatically higher spending, Warren and Sanders fit the bill. If populism is about expanding executive power, President Obama was the one who memorably stated “I’ve got a pen, and I’ve got a phone,” meaning that he would advance his policy agenda without working through Congress or the court system. Just as Trump has used executive authority to undo a number of the Obama administration pen-and-phone directives, Warren, Sanders, Biden, and others are promising to use executive power to reinstate them and to add others of their own. When Democrats suggest packing the Supreme Court, ending the electoral college, and monitoring or limiting what political commentary will be allowed online or in advertisements, they are pushing back on some of the established institutional constraints on power.

Of course, none of this is to say that leading Democratic candidates are “just like” President Trump. (No prominent American politician in my lifetime, of any party, is “just like” Trump.) For example, it seems plausible to argue that the Democrats are more likely to enunciate their views in the language of economic populism, while Republicans are more likely to enunciate their views in the language of cultural populism. But there is some overlap here, and my point is that some deeper elements of the populist stance attract public support across party lines.

Is Modern Populism a Sign of a New Political Divide? 

For much of my lifetime, it has been common to characterize the main US political divide as labor vs. capital, or workers vs. corporations. But the divisions of modern populism don’t quite work in this way. Colantone and Stanig describe the shift in this way:

[T]he recent political shifts may reflect a structural realignment of social groups and parties along new political dividing lines, which might be here to stay. In the half century after World War II, the politics of advanced western European democracies were structured to a large extent by a conflict between labor and owners of capital, and took the form of choices between more reliance on markets and deeper state intervention in the context of European economic and political integration. In the coming years, political conflict might capture a fundamental contraposition between winners and losers of structural changes in the economy, and may be centered mainly on a cosmopolitan versus nationalist conflict. The result could well be a credible restructuring of current traditional parties or the emergence of new parties that might assemble social constituencies in favor of inclusive globalization and technological progress. As such changes occur, the representation of vulnerable segments of society is not bound to be a prerogative of economic nationalist and radical-right forces. The challenge for believers in liberal policies is how to popularize a version of embedded liberalism that will be responsive to the current challenges of slow growth and structural economic shifts. 

Other writers have touched on a similar theme, focusing on how modern societies are being sorted between people in urban areas, who live in a multicultural and international day-to-day world, often hold socially liberal values, and are reaping many of the benefits of economic growth, and people in smaller cities and rural areas, who see their local economy as stagnating and their place in society as diminishing.

The fundamental challenge is that we are living in a time of powerful underlying changes: in technology, communication, globalization, corporate structure, jobs, geographical sorting, marital sorting, an aging population, environmental dangers, and others. The stresses created by these changes are real, and addressing them is hard. But the populist impulse, whether it arrives from the right or the left, is rooted in would-be authoritarian executives who stoke us-vs.-them social divisions while trumpeting their unrealistic or harmful policies as an easy answer. 

Interview with Maureen Cropper: Environmental Economics

Catherine L. Kling and Fran Sussman have “A Conversation with Maureen Cropper” in the Annual Review of Resource Economics (October 2019, 11, pp. 1-18). As they write in the introduction: “Maureen has made important contributions to several areas of environmental economics, including nonmarket valuation and the evaluation of environmental programs. She has also conducted pioneering studies on household transportation use and associated externalities.” There is also a short (roughly a dozen paragraphs) overview of some of Cropper’s best-known work.

I had not known that Cropper identified as a monetary economist when she was headed for graduate school. Here is her description of her early path to environmental economics:

My first formal introduction to economics was in college. I entered Bryn Mawr College in 1966. I had great professors at Bryn Mawr: Philip W. Bell, Morton Baratz, and Richard DuBoff. I learned microeconomics by reading James Meade’s A Geometry of International Trade—that’s how we were taught microeconomics by Philip Bell. It was really a very good grounding in economics. I got married as I graduated from college to Stephen Cropper (hence my last name), and I went to Cornell University because Stephen was admitted to the Cornell Law School. I was admitted to the Department of Economics at Cornell.

Frankly, my interests at the time were really in monetary economics, so I took several courses at the Cornell Business School, including courses in portfolio theory. My dissertation was on bank portfolio selection with stochastic deposit flows. My dissertation advisor was S.C. Tsiang. Henry Wan and T.C. Liu were also on my committee. Henry was a fantastic mentor and advisor. I would write a chapter of my dissertation and put it in his mailbox; the next day he would have it covered with comments. He was just an amazing advisor and very, very engaged. At this time, I was not doing anything in environmental economics. In fact, my first job offer was from the NYU Business School.

The reason I went into environmental economics is that I met Russ Porter in graduate school. Russ later became the father of my children. We decided that we would go on the job market together and looked for a place that would hire two economists. We wound up at the University of California, Riverside, which at the time was the birthplace of the Journal of Environmental Economics and Management (JEEM). I was on the job market in 1973, just when this journal was launched. Ralph d’Arge was the chair of the department then. Tom Crocker also taught there, and Bill Schulze and Jim Wilen were students in the department.

It was going to UC Riverside that really caused me to switch fields and go into environmental economics. It was a very important decision, although I must say it was made partly for personal reasons. It’s had a huge impact on my life.

In the interview and overview, it quickly becomes apparent that Cropper has worked on an extraordinarily wide array of topics. Examples include stated preference studies to estimate the value of a statistical life, which became the basis for estimates used by the OECD and in Canada. Another study became the basis for EPA estimates of the value of avoiding a case of chronic bronchitis in air pollution regulations. Cropper worked on whether or not to ban certain pesticides under the Federal Insecticide, Fungicide, and Rodenticide Act, what methods to use in cleaning up Superfund sites, and whether to ban certain uses of asbestos under the Toxic Substances Control Act (TSCA). She worked on how to use trading allowances to reduce sulfur dioxide (SO2) emissions under the Acid Rain Program.

Cropper worked on many issues involving air pollution in India. She worked on models of estimating household location choices in Baltimore and in Mumbai. The studies in Mumbai became the basis for looking at other policies: slum relocation or converting buses to compressed natural gas. She has estimated how the shapes of cities affect demand for travel, and studied cross-country data on the relationship between growth and traffic fatalities. The interview touches on these topics and more. Here’s a description of one such study from Cropper:

When I first got to the World Bank I realized that, in India, there hadn’t been any state-of-the-art studies on the impact of air pollution on mortality. This was around 1995, the time when important cohort studies by Arden Pope and Douglas Dockery were coming out in the United States.

There is also literature looking at the impact of acute exposures to air pollution—daily time-series studies of the impact of air pollution on mortality. With the support of the Bank, I was able to get information in Delhi from air quality monitors—four years of daily data, although monitoring was not done every day. I was also able to obtain data on deaths by cause and age. I worked with Nathalie Simon and Anna Alberini to carry out a daily time-series study of the impact of air pollution on mortality. …

We had a hard time getting the study published in an epidemiological journal because economists write up their results differently than epidemiologists. But we did document significant effects of particulate matter on mortality. And, it was important to do something early on and convince people in India that this sort of work could be done. (There have been many subsequent studies.) It is also interesting that the results we obtained in Delhi were similar to results obtained in other time-series studies in the United States.

When those at the EPA or the World Bank or the National Academy of Sciences were setting up an advisory committee or a consensus panel to produce a report or an evaluation, Cropper’s name was perpetually on the short list. Her memory of one such experience gives a sense of why she has been in such high demand:

I learned so much in my time serving on the EPA Science Advisory Board. I actually began there in the 1990s when the retrospective analysis of the Clean Air Act—the first Section 812 study—was being written. Dick Schmalensee was the head of that committee. I actually chaired the review of the first prospective 812 study of the benefits and costs of the 1990 Clean Air Act Amendments.

I also preceded you, Cathy, as the head of the Environmental Economics Advisory Committee at EPA. I learned a lot being on these EPA committees. In terms of the 812 studies, you’ve got a subcommittee that’s dealing with the health impacts: epidemiologists and toxicologists. You have air quality modelers and people who are exposure measurement experts. And of course, you also have economists. It’s a fantastic opportunity to be exposed to all parts of the analysis. If you are concerned about air pollution policy, which is what I’ve worked on the most, you need to get the perspective of all of these different disciplines.

I was also interested in Cropper’s comments on how, in the area of environmental economics, theoretical research has diminished and empirical work has become more prominent. My sense is that this is broadly true for many fields of economics. Cropper says:

I think quasi-experimental econometrics are one of the things that graduate students really do learn nowadays. Graduate students are also learning structural approaches. If you want to estimate the welfare impacts of corporate average fuel economy (CAFE) standards on the new car market, you’ve got to use a structural model. You also have students who study empirical industrial organization, bringing those techniques to bear in environmental economics. In terms of the percentage of work that is done today that is more theoretically based, my impression is that theoretical research really has declined, in terms of the number of purely theoretical papers written or even papers that are using theoretical approaches.

The emphasis on theory has also changed during the time I have been teaching. When I was teaching a graduate class a few years ago, we were talking about discounting issues and, of course, the Ramsey formula. Students had heard of the Ramsey formula, but when I asked students if they knew who Frank Ramsey was, I was surprised to find that they didn’t know. The fact is, I think there has been this shift. When I teach environmental economics, the preparation of students in terms of econometric techniques is really quite impressive. I’ve got to say that has really been ramped up. That represents an important change in the profession …
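As an aside, for readers who, like those students, have heard of the Ramsey formula but not seen it written down: the standard Ramsey rule for the social discount rate (in the usual textbook notation, which is my addition, not part of the interview) is

```latex
% Ramsey rule for the social discount rate (standard form)
% r      : social discount rate
% \delta : pure rate of time preference (utility discount rate)
% \eta   : elasticity of the marginal utility of consumption
% g      : growth rate of per-capita consumption
r = \delta + \eta g
```

Debates over discounting in environmental economics, such as those surrounding the Stern Review on climate change, largely turn on the values chosen for the parameters delta and eta.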

Fall 2019 Journal of Economic Perspectives Available Online

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided–to my delight–that it would be freely available online, from the current issue back to the first issue. You can download it in various e-reader formats, too. Here, I’ll start with the Table of Contents for the just-released Fall 2019 issue, which in the Taylor household is known as issue #130. Below that are abstracts and direct links for all of the papers. I will probably blog more specifically about some of the papers in the next week or two, as well.

_________________

Symposium on Fiftieth Anniversary of the Clean Air and Water Acts

“What Do Economists Have to Say about the Clean Air Act 50 Years after the Establishment of the Environmental Protection Agency?” by Janet Currie and Reed Walker

Air quality in the United States has improved dramatically over the past 50 years in large part due to the introduction of the Clean Air Act and the creation of the Environmental Protection Agency to enforce it. This article is a reflection on the 50-year anniversary of the formation of the Environmental Protection Agency, describing what economic research says about the ways in which the Clean Air Act has shaped our society—in terms of costs, benefits, and important distributional concerns. We conclude with a discussion of how recent changes to both policy and technology present new opportunities for researchers in this area.
Full-Text Access | Supplementary Materials

“Policy Evolution under the Clean Air Act,” by Richard Schmalensee and Robert N. Stavins

The US Clean Air Act, passed in 1970 with strong bipartisan support, was the first environmental law to give the federal government a serious regulatory role, established the architecture of the US air pollution control system, and became a model for subsequent environmental laws in the United States and globally. We outline the act’s key provisions, as well as the main changes Congress has made to it over time. We assess the evolution of air pollution control policy under the Clean Air Act, with particular attention to the types of policy instruments used. We provide a generic assessment of the major types of policy instruments, and we trace and assess the historical evolution of the Environmental Protection Agency’s policy instrument use, with particular focus on the increased use of market-based policy instruments, beginning in the 1970s and culminating in the 1990s. Over the past 50 years, air pollution regulation has gradually become more complex, and over the past 20 years, policy debates have become increasingly partisan and polarized, to the point that it has become impossible to amend the act or pass other legislation to address the new threat of climate change.
Full-Text Access | Supplementary Materials

“US Water Pollution Regulation over the Past Half Century: Burning Waters to Crystal Springs?” by David A. Keiser and Joseph S. Shapiro

In the half century since the founding of the US Environmental Protection Agency, public and private US sources have spent nearly $5 trillion ($2017) to provide clean rivers, lakes, and drinking water (annual spending of 0.8 percent of US GDP in most years). Yet over half of rivers and substantial shares of drinking water systems violate standards, and polls for decades have listed water pollution as Americans’ number one environmental concern. We assess the history, effectiveness, and efficiency of the Clean Water Act and Safe Drinking Water Act and obtain four main conclusions. First, water pollution has fallen since these laws were passed, in part due to their interventions. Second, investments made under these laws could be more cost effective. Third, most recent studies estimate benefits of cleaning up pollution in rivers and lakes that are less than the costs, though these studies may undercount several potentially important types of benefits. Analysis finds more positive net benefits of drinking water quality investments. Fourth, economic research and teaching on water pollution are relatively uncommon, as measured by samples of publications, conference presentations, and textbooks.
Full-Text Access | Supplementary Materials

Symposium on Modern Populism

“On Latin American Populism, and Its Echoes around the World,” by Sebastian Edwards

In this article, I discuss the ways in which populist experiments have evolved historically. Populists are charismatic leaders who use a fiery rhetoric to pitch the interests of “the people” against those of banks, large firms, multinational companies, the International Monetary Fund, and immigrants. Populists implement redistributive policies that violate the basic laws of economics, and in particular budget constraints. Most populist experiments go through five distinct phases that span from euphoria to collapse. Historically, the vast majority of populist episodes end up badly; incomes of the poor and middle class tend to be lower than when the experiment was launched. I argue that many of the characteristics of traditional Latin American populism are present in more recent manifestations from around the globe.
Full-Text Access | Supplementary Materials

“Informational Autocrats,” by Sergei Guriev and Daniel Treisman

In recent decades, dictatorships based on mass repression have largely given way to a new model based on the manipulation of information. Instead of terrorizing citizens into submission, “informational autocrats” artificially boost their popularity by convincing the public they are competent. To do so, they use propaganda and silence informed members of the elite by co-optation or censorship. Using several sources, including a newly created dataset on authoritarian control techniques, we document a range of trends in recent autocracies consistent with this new model: a decline in violence, efforts to conceal state repression, rejection of official ideologies, imitation of democracy, a perceptions gap between the masses and the elite, and the adoption by leaders of a rhetoric of performance rather than one aimed at inspiring fear.
Full-Text Access | Supplementary Materials

“The Surge of Economic Nationalism in Western Europe,” by Italo Colantone and Piero Stanig

We document the surge of economic nationalist and radical-right parties in western Europe between the early 1990s and 2016. We discuss how economic shocks contribute to explaining this political shift, looking in turn at theory and evidence on the political effects of globalization, technological change, the financial and sovereign debt crises of 2008–2009 and 2011–2013, and immigration. The main message that emerges is that failures in addressing the distributional consequences of economic shocks are a key factor behind the success of nationalist and radical-right parties. We discuss how the economic explanations compete with and complement the “cultural backlash” view. We reflect on possible future political developments, which depend on the evolving intensities of economic shocks, on the strength and persistence of adjustment costs, and on changes on the supply side of politics.
Full-Text Access | Supplementary Materials

“Economic Insecurity and the Causes of Populism, Reconsidered,” by Yotam Margalit

Growing conventional wisdom holds that a chief driver of the populist vote is economic insecurity. I contend that this view overstates the role of economic insecurity as an explanation in several ways. First, it conflates the significance of economic insecurity in influencing the election outcome on the margin with its significance in explaining the overall populist vote. Empirical findings indicate that the share of populist support explained by economic insecurity is modest. Second, recent evidence indicates that voters’ concern with immigration—a key issue for many populist parties—is only marginally shaped by its real or perceived repercussions on their economic standing. Third, economics-centric accounts of populism treat voters’ cultural concerns as largely a by-product of experiencing adverse economic change. This approach underplays the reverse process, whereby disaffection from social and cultural change drives both economic discontent and support for populism.
Full-Text Access | Supplementary Materials

Articles

“What They Were Thinking Then: The Consequences for Macroeconomics during the Past 60 Years,” by George A. Akerlof

This article explores the development of Keynesian macroeconomics in its early years, and especially in the Big Bang period immediately after the publication of The General Theory. In this period, as standard macroeconomics evolved into the “Keynesian-neoclassical synthesis,” its promoters discarded many of the insights of The General Theory. The paradigm that was adopted had some advantages. But its simplifications have had serious consequences—including immense regulatory inertia in response to massive changes in the financial system and unnecessarily narrow application of accelerationist considerations (regarding inflation expectations).
Full-Text Access | Supplementary Materials

“The Impact of the 2018 Tariffs on Prices and Welfare,” by Mary Amiti, Stephen J. Redding and David E. Weinstein

We examine conventional approaches to evaluating the economic impact of protectionist trade policies. We illustrate these conventional approaches by applying them to the tariffs introduced by the Trump administration during 2018. In the wake of this increase in trade protection, the United States experienced substantial increases in the prices of intermediates and final goods, dramatic changes to its supply-chain network, reductions in availability of imported varieties, and the complete pass-through of the tariffs into domestic prices of imported goods. Therefore, the full incidence of the tariffs has fallen on domestic consumers and importers so far, and our estimates imply a reduction in aggregate US real income of $1.4 billion per month by the end of 2018. We see similar patterns for foreign countries that have retaliated with their own tariffs against the United States, which suggests that the trade war has also reduced the real income of these other countries.
Full-Text Access | Supplementary Materials

“Retrospectives: Tragedy of the Commons after 50 Years,” by Brett M. Frischmann, Alain Marciano and Giovanni Battista Ramello

Garrett Hardin’s “The Tragedy of the Commons” (1968) has been incredibly influential generally and within economics, and it remains important despite some historical and conceptual flaws. Hardin focused on the stress population growth inevitably placed on environmental resources. Unconstrained consumption of a shared resource—a pasture, a highway, a server—by individuals acting in rational pursuit of their self-interest can lead to congestion and, worse, rapid depreciation, depletion, and even destruction of the resources. Our societies face similar problems, with respect to not only environmental resources but also infrastructures, knowledge, and many other shared resources. In this article, we examine how the tragedy of the commons has fared within the economics literature and its relevance for economic and public policies today. We revisit the original piece to explain Hardin’s purpose and conceptual approach. We expose two conceptual mistakes he made: conflating resource with governance and conflating open access with commons. This critical discussion leads us to the work of Elinor Ostrom, the recent Nobel Prize in Economics laureate, who spent her life working on commons. Finally, we discuss a few modern examples of commons governance of shared resources.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor

The Hearing Aid Example: Why Technology Doesn’t Reduce Trade

Will the new technologies of 3D printing and robotics lead to a reduction in international trade? After all, if countries can use 3D printing and robotics to make goods at home, why import from abroad?

But there is a fascinating counterexample to these fears: the case of hearing aids. Around the world, they are nearly 100% produced by 3D printing. But international trade in hearing aids is rising, not falling. Caroline Freund, Alen Mulabdic, and Michele Ruta discuss this and other examples in “Is 3D Printing a Threat to Global Trade? The Trade Effects You Didn’t Hear About” (World Bank Policy Research Working Paper 9024, September 2019). For a readable summary, the authors have written a short overview article at VoxEU as well.

The authors point to a prediction that 3D printing could eliminate as much as 40% of all world trade by 2040. But actual examples like hearing aids don’t seem to be working out this way. As they note:

3D printers transformed the hearing aid industry in less than 500 days in the mid-2000s, which makes this product a unique natural experiment to assess the trade effects of this technology. … The intuition for the results is that 3D printing led to a reduction in the cost of production. Demand rose and trade expanded. There is no evidence that 3D printing shifted production closer to consumers and displaced trade. One reason is that hearing aids are light products which makes them relatively cheap to transport internationally - we come back to this point below. A second reason is because printing hearing aids in high volumes requires a large investment in technology and machinery and the presence of highly specialized inputs and services. The countries that were early innovators, Denmark, Switzerland and Singapore, remain the main export platforms. Some middle-income economies such as China, Mexico and Vietnam have also been able to substantially increase their market shares between 1995 and 2015. As a result, exports did not become more concentrated in the top producing countries following the introduction of 3D printing.

Data from the US market show the effect of 3D printing of hearing aids on prices and quality, and thus on expanded use (citations and footnotes omitted):

The new technology fundamentally changed the industry because it produced a better product at a lower cost. The change is visible in US import price data and hearing aid usage. The United States is the number one importer of hearing aids and has relatively accurate data on unit prices. … [T]he unit value of hearing aids imported into the United States dropped by around 25 percent after 2007, right around when the technology was adopted. Hearing aid usage also increased dramatically. From 2001 to 2008 only about 26 percent of the population above 70 with hearing loss used hearing aids, and the share was flat over the period. From 2008 to 2013 (last year of data), the share increased to 32 percent. Despite the potential benefits from the use of hearing aids, stigma, discomfort and cost had been among the most frequent reasons for rejecting the use of hearing instruments.

What about other industries where 3D printing is important? In preliminary work looking across 35 different industries, Freund, Mulabdic, and Ruta find the same general pattern: that is, 3D printing leads to lower prices and thus benefits for consumers, including consumers in developing countries, but no particular shift in trade patterns. They write:

One example comes from dentistry, where custom products are in high demand but are being manufactured and exported by high-tech firms. Consider Renishaw, a British engineering company, that makes dental crowns and bridges from digital scans of patients’ teeth. The printers run for 8-10 hours to make custom teeth from cobalt-chrome alloy powder, which are then exported. Dentists are not installing the machines to print teeth locally, rather the parts are shipped to dental labs in Europe, where a layer of porcelain is added before the teeth are shipped to dentists. With 3D printing, the production process changed but the supply chain remains intact. In addition to teeth, the innovative technology is also being used for several other goods, from running shoes to prosthetic limbs.

Remembering the Cadillac Tax

When employers pay the health insurance premiums for their employees, these payments are exempt from income tax. If health insurance payments by employers were taxed as income, the government would collect about $200 billion in additional income taxes, and another $130 billion in payroll taxes for supporting Social Security and Medicare (according to the Analytical Perspectives volume of the US budget for 2020, Table 16-1).

The notion that the US would finance its private-sector health insurance system in this way is an historical accident going back to World War II. As Melissa Thomasson explains at the website of the Economic History Association:

During World War II, wage and price controls prevented employers from using wages to compete for scarce labor. Under the 1942 Stabilization Act, Congress limited the wage increases that could be offered by firms, but permitted the adoption of employee insurance plans. In this way, health benefit packages offered one means of securing workers. … Perhaps the most influential aspect of government intervention that shaped the employer-based system of health insurance was the tax treatment of employer-provided contributions to employee health insurance plans. First, employers did not have to pay payroll tax on their contributions to employee health plans. Further, under certain circumstances, employees did not have to pay income tax on their employer’s contributions to their health insurance plans.

The idea that US employers will often pay for health insurance, and that this will be an important element of what most Americans mean by a “good job,” is embedded in how most of us think about the US healthcare system. But it’s worth being clear on its distributional effects and the economic incentives it provides. When employers provide a benefit whose value is exempt from income tax, it naturally offers a greater benefit to those with high incomes, who otherwise would have paid higher income taxes. In addition, when employer-provided health insurance is tax-free, people have an incentive to receive compensation in this tax-free form, rather than in a taxed form.

Katherine Baicker describes these dynamics in her 2019 Martin Feldstein Lecture at the National Bureau of Economic Research, “Economic Analysis for Evidence-Based Health Policy: Progress and Pitfalls” (NBER Reporter, September 2019). She says:

On the private side, the dominance of the employer-sponsored insurance market is driven in large part by the tax preference for health insurance benefits relative to wage compensation, which also drives down cost-sharing, since care covered through insurance plan premiums is often tax-preferred to out-of-pocket spending. This aspect of the tax code is thus both inefficient (driving inefficient utilization through moral hazard) and regressive (favoring people with higher incomes and more generous benefits)—a rare opportunity to improve both efficiency and distribution through reform.

This is a prime example of the challenge of translating economic insights into policy: Even though economists on both sides of the aisle agreed, proposing the taxation of employer-sponsored insurance to policymakers and the public was not popular. The “Cadillac tax” on expensive plans came into existence largely because it was nominally levied on insurers rather than taxpayers. This made it more politically palatable, even though it does not mean that the ultimate incidence falls on insurers, and it constrains the degree to which it can undo the regressivity of the tax treatment of employer-sponsored insurance. Earlier this year, the House voted to repeal the Cadillac tax; whether it will ever take effect remains an open question.

What is this “Cadillac tax” she is talking about? As part of the Patient Protection and Affordable Care Act of 2010, there was a provision that if someone was receiving very high-cost health insurance from an employer, there would be a tax equal to 40% of the value of the health insurance benefits above a certain level. The bill was careful to specify that this tax would be paid by employers–but of course, it would be reflected in the design of health insurance plans and in the overall compensation received by workers.

In a standard example of the elegant dance moves that make up the budgeting process, the 2010 legislation bravely postponed the imposition of the Cadillac tax until 2018, which would be two years after President Obama left office even if he served a second term. Thus, the revenues from collecting the Cadillac tax could be counted in the long-run budget projections for the legislation–to show it wouldn\’t cost too much over time–but the actual tax was comfortably off in the future. 
Quite predictably, the Cadillac tax was then postponed from 2018 to 2020, and then to 2022. In its latest version: “This ‘Cadillac tax’ will equal 40 percent of the value of health benefits exceeding thresholds projected to be $11,200 for single coverage and $30,150 for family coverage in 2022.”
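Note that the 40 percent rate applies only to the slice of benefits above the threshold, not to the whole value of the plan. A minimal sketch of that arithmetic (the function name and the example plan values are hypothetical illustrations; the thresholds are the projected 2022 figures quoted above):

```python
# Illustrative sketch of the Cadillac tax formula: a 40% excise tax on the
# value of employer-provided health benefits ABOVE a coverage threshold.
# Thresholds are the projected 2022 figures; plan values below are made up.

THRESHOLDS = {"single": 11_200, "family": 30_150}
RATE = 0.40

def cadillac_tax(benefit_value: float, coverage: str = "single") -> float:
    """Return the excise tax owed on the portion of benefits above the threshold."""
    excess = max(0.0, benefit_value - THRESHOLDS[coverage])
    return RATE * excess

# A hypothetical $35,000 family plan owes tax only on the $4,850 excess:
# 0.40 * (35,000 - 30,150), i.e. about $1,940 -- not 40% of the full $35,000.
print(cadillac_tax(35_000, "family"))
# A plan below the threshold owes nothing:
print(cadillac_tax(10_000, "single"))
```

So a plan just over the threshold faces only a small tax, which is why the levy was aimed at the most expensive (“Cadillac”) plans rather than at employer-provided insurance in general.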
In July, the Democrat-controlled House of Representatives voted to repeal the Cadillac tax altogether. It’s not yet clear whether the Republican-controlled Senate will go along. Perhaps the repeal of the Cadillac tax will get stuck in the gears of politics for a little longer. But at this point there’s no reason to believe that the tax will ever actually go into effect.

I have my doubts about how the 2010 Cadillac tax was designed. For example, various health care analysts have argued that a better approach would be to set certain thresholds and then simply tax any health insurance benefits above that level as regular income. I also have my cynical doubts about whether politicians back in 2010 intended that the Cadillac tax would ever go into effect. But whatever the details of the design or the underlying motivations, the Cadillac tax was a modest effort. If allowed to take effect, it would raise about $8 billion in 2022, rising to $38 billion by 2028 (according to the Congressional Budget Office, see discussion starting on p. 231). It was the only meaningful effort to reduce the tax exemption for employer-provided health insurance, and to use that money to make health insurance more affordable for others. And even so, it seems to be politically impossible.