Reskilling over a Lifetime

The usual pattern of spending on skill development over an American’s lifetime looks like this:

The figure is from a report by the White House Council of Economic Advisers, titled “Addressing America’s Reskilling Challenge” (July 2018). The blue area shows public education spending, which is high during K-12 years, but the average spending per person drops off during college years. After all, many people don’t attend college, and of those who do, many don’t attend a public college. Private education spending, shown by the red area, takes off during college years, and then trails off through the 20s and 30s of an average person. By about age 40, public and private spending on education and skills training is very low. Spending on formal training by employers, shown by the gray area, does continue through most of the work-life.

The figure focuses on explicit spending, not on informal learning on the job. As the report notes: “Some estimates suggest that the value of these informal training opportunities is more than twice that of formal training.” Nonetheless, it is striking that spending on skills and human capital is so front-loaded in life. The report cites estimates that over a working lifetime from ages 25-64, the average employer spending per person on formal training totals about $40,000.
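
For a rough sense of scale, that lifetime total works out to a modest annual figure. Here is a minimal back-of-envelope sketch (the arithmetic only, using the $40,000 estimate quoted above):

```python
# Quick sense of scale for the quoted figure: roughly $40,000 of employer
# spending on formal training spread over a working life from ages 25 to 64.
total_employer_training = 40_000   # dollars, from the report's estimate
working_years = 64 - 25 + 1        # ages 25 through 64 inclusive

per_year = total_employer_training / working_years
print(f"Average formal training spending per worker: ${per_year:,.0f} per year")
# About $1,000 per worker per year, which is modest compared with what is
# spent per student during the K-12 years.
```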

The report mentions an array of programs to address mid-life reskilling and lifelong learning: efforts by nonprofits, by certain states, by some unions, and government programs like Trade Adjustment Assistance. It also gives examples from programs in Sweden, Germany, Canada, and South Korea. For example, here’s a taste of what happens in Germany:

“Germany has a model for promoting reemployment that privatizes public job placement services. In 2002, the German government introduced a “voucher” system. The system provides compensation to private job placement services that successfully find employment for displaced workers. All workers unemployed for three months or longer are eligible for the program, and the payment rate for successful placement increases with the duration of unemployment.

“In the German program, individuals sign placement contracts with private agencies. If the agency finds the worker a job and an employment contract is signed, the agency can redeem the public voucher. Voucher payments are conditional on worker tenure in the new employment arrangement, and if the employment relationship lasts fewer than three months, the voucher payment must be refunded. Vouchers are more likely to be utilized by younger workers with higher skill levels and, overall, private job placement services were tapped by only 2 percent of job seekers once the program was fully implemented. …

“In addition to the job placement voucher, Germany also utilizes a job training voucher system (Tergeist and Grubb 2006; Besharov and Call 2016). The purpose of this voucher is to provide individuals with the opportunity to choose which training program to participate in. However, eligible training programs must have a proven track record, usually a 70 percent success rate in placing individuals out of unemployment within six months after the end of the program. Strittmatter (2016) found that these training programs tend to have negative employment outcomes in the immediate term as individuals focus on their training and reduce time spent searching for a job. This negative effect can last for up to two years, likely due to the long duration of these vocational programs. But after four years, the voucher system exhibits clear gains: unemployed individuals who utilize the vocational training voucher experience a two percentage point increase in their employment probability when compared to individuals who did not participate in such a training program.”

The CEA notes a point I’ve tried to make on this blog a few times over the years. Labor market policies can be “passive,” like paying benefits to the unemployed, or “active,” like assisting with job search and retraining. The US has traditionally spent far less on active labor market policies than other high-income countries (for more discussion, see here and here):

In a US economy where jobs and the skills needed to fill them are continually evolving, and where some of the necessary shifts can be wrenching and large, there is a case for thinking more systematically about job training throughout life. Individuals who need retraining face some practical problems: an information problem of what kind of training employers really want; an availability problem of how and where to get that training; and a financial problem of how to pay for it. Many employers do not necessarily view themselves as primarily in the training business. They want employees who are plug-and-play, ready to contribute on the first day. And employers worry that in an economy where employees shift between jobs, any efforts they make to train workers will only end up benefiting workers at their next employer. But programs for lifelong learning need input and support from employers, because ultimately employers are the ones who know what skills they want.

Moreover, the issue here is not just about helping those who are unemployed or between jobs. It’s also about those who would prefer to add skills now, in the hope of gaining job security or getting a promotion from a current employer. Lifelong learning doesn’t just happen during spells of unemployment.

I do not have a well-defined proposal to offer here. In political terms, it’s interesting to see the White House economists raising these issues. But the Trump administration does not seem to be one where the economists have an especially loud voice in making policy.

Economics of Climate Change: Three Recent Takes

Most economists took their last course in physical science many years ago, back in college days, and lack any particular in-depth knowledge of how to model weather or climate.  But economists can contribute usefully to the climate change debate in other ways. At least some economists do have expertise in patterns of energy use, potential for substitution, and technology, and thus have something to say about likely future paths for the emissions of carbon (and other greenhouse gases), and what it might take to change these paths. And at least some economists have expertise in thinking about how changes to climate would affect economic and human outcomes, ranging from crop yields to human mortality. Here are a few recent examples.

Richard Schmalensee takes a usefully unsentimental look at the prospects for a genuinely dramatic reduction in carbon emissions in “Handicapping the High-Stakes Race to Net-Zero,” appearing in the Milken Institute Review (Third Quarter 2018). He emphasizes three main challenges:

1) Carbon emissions from emerging economies are rising rapidly, often based on building new plants that generate electricity from coal. Even if emissions in advanced economies were slashed dramatically, there won’t be much progress on reducing global carbon emissions without tackling the issue in emerging economies. Schmalensee offers a scenario to illustrate this challenge:

“[S]uppose that, in the next decade or two, advanced economy emissions are cut in half, there is no population growth in emerging economies and emerging-economy emissions per dollar of GDP are cut by 31 percent (to the advanced economy average). Suppose, too, that GDP per capita in emerging economies rises to only 45 percent of the advanced economy average (roughly double what it is today). In this optimistic case, global emissions would still rise by about 1 percent.”
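
As a rough check on the arithmetic in that scenario, here is a minimal sketch. The split of current global emissions between advanced and emerging economies is my own assumption (roughly 40/60), not a figure from Schmalensee’s article, and the result is sensitive to it:

```python
# Back-of-envelope check of Schmalensee's scenario (a sketch, not his model).
# Assumption (mine, not from the article): advanced economies currently account
# for roughly 40% of global CO2 emissions and emerging economies for roughly 60%.
advanced_share = 0.40
emerging_share = 0.60

# Scenario from the quoted passage:
advanced_cut = 0.5           # advanced-economy emissions are cut in half
intensity_cut = 0.31         # emerging emissions per dollar of GDP fall 31%
gdp_per_capita_growth = 2.0  # emerging GDP per capita roughly doubles
population_growth = 1.0      # no population growth in emerging economies

new_advanced = advanced_share * advanced_cut
new_emerging = (emerging_share * population_growth *
                gdp_per_capita_growth * (1 - intensity_cut))

change = (new_advanced + new_emerging) - 1.0
print(f"Change in global emissions: {change:+.1%}")
# With a 40/60 split this prints roughly +3%; if advanced economies are closer
# to ~42% of current emissions, the result lands near the ~1% rise he cites.
```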

Schmalensee also believes that the most popular form of solar energy–that is, photovoltaic technology based on crystalline silicon–is unlikely ever to become cost-competitive with fossil fuels. So the dual challenge here is to find alternatives for low-carbon or carbon-free production of electricity, and then find ways for emerging market and low-income economies to afford the switch to these new technologies.

2) Most scenarios for decarbonization of energy put a high emphasis on use of solar and wind, which raises the challenge of how to build an electricity grid that relies on an intermittent source of energy. Schmalensee writes:

“The most mature, widely deployed carbon-free generation technologies are wind, solar, hydroelectric and nuclear. Political resistance in many nations to building more dams is substantial, as is resistance to nuclear plants using current-generation designs — though generation from both sources will no doubt expand in emerging economies. Other technologies that are potentially valuable in a carbon-constrained world — among them carbon capture and storage, biofuels, geothermal energy, nuclear fusion, waste-to-energy and wave power — are either untried, immature or only suitable for special locations.

“Accordingly, in most deep decarbonization scenarios, wind and solar play leading roles in mid-century electricity supply. … But getting to net-zero seems likely to require going significantly beyond 50 percent wind and solar. The main problem is that wind and solar generation are intermittent, with output that is variable on time scales ranging from minutes to seasons, and imperfectly predictable. We know how to operate electric power systems with substantial intermittent generation at reasonable cost, as Germany and California have demonstrated. It is, however, almost universally agreed that we do not know how to operate systems dominated by intermittent generation at reasonable cost.”

The solutions here could involve either developing cost-effective and carbon-free sources of electricity that are not intermittent (small nuclear reactors? carbon capture and storage?) or cost-effective methods for mass storage of energy (batteries?). All of these approaches need considerably more research and development.

3) The main focus of decarbonization has been on production of electricity, but that’s only one way in which humans produce and use energy. Schmalensee writes:

“While decarbonizing electricity generation is a necessary step toward net-zero, electricity generation accounts for only about one-third of human-caused CO₂ emissions. Transportation accounts for another fifth — and while road transport (about 15 percent of total emissions) could be electrified at some cost, electrification of air transport seems highly unlikely. More importantly, little attention has been paid to reducing the substantial emissions from industry and construction (about 20 percent), land use (about 13 percent) and various other sources, including cement production and building heating (about 13 percent).”

Thinking about emissions in all of these contexts, and how they occur everywhere in the world, is the actual challenge.

Schmalensee is willing to contemplate large resource expenditures to address these issues. He writes: “In 1965 and 1966, NASA accounted for more than 4 percent of federal spending, which would translate to about $160 billion today. In contrast, the U.S. Department of Energy’s budget request for clean technology development in FY2017 was a paltry $9 billion.” His deeper message is that if people are actually serious about the goal of substantial decarbonization of the global economy, announcing lofty goals won’t suffice, and modest subsidies for existing technologies won’t be nearly enough. A genuinely enormous commitment to change is needed.
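
For a sense of what that comparison implies in dollars, here is a quick back-of-envelope sketch; the figure of roughly $4 trillion for current federal outlays is my own assumption, not a number from the article:

```python
# Rough scale check on the NASA comparison (a sketch; the $4 trillion figure
# for current federal outlays is my assumption, not from Schmalensee's article).
federal_outlays_today = 4.0e12   # approximate US federal spending, ~2018
nasa_share_1965 = 0.04           # NASA was >4% of federal spending in 1965-66

equivalent_today = nasa_share_1965 * federal_outlays_today
doe_clean_tech_request = 9e9     # DOE clean-technology request, FY2017 (from the quote)

print(f"4% of federal spending today: ${equivalent_today/1e9:.0f} billion")
print(f"Ratio to DOE clean-tech request: {equivalent_today/doe_clean_tech_request:.0f}x")
# Prints about $160 billion, roughly 18 times the $9 billion request.
```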

Two recent studies by economists take a look at consequences of climate change. One group project from the Climate Impact Lab consortium, “Valuing the Global Mortality Consequences of Climate Change Accounting for Adaptation Costs and Benefits,” was written by Tamma Carleton, Michael Delgado, Michael Greenstone, Trevor Houser, Solomon Hsiang, Andrew Hultgren, Amir Jina, Robert Kopp, Kelly McCusker, Ishan Nath, James Rising, Ashwin Rode, Samuel Seo, Justin Simcock, Arvid Viaene, Jiacan Yuan, and Alice Zhang (Becker Friedman Institute for Economics at the University of Chicago, Working Paper 2018-51, August 2018). They write:

“[W]e estimate the mortality-temperature relationship around the world, both today and into the future. This is accomplished by using the most exhaustive dataset ever collected on annual, subnational mortality statistics. These data cover the universe of deaths from 41 countries totaling 56% of the global population at a resolution similar to that of US counties (2nd-administrative level) for each year across multiple age categories (i.e. 64). These data allow us to estimate the mortality-temperature relationship with substantially greater resolution and coverage of the human population than previous studies; the most comprehensive econometric analyses to date have been for a single country or individual cities from several countries. We find that in our sample an additional 35°C day (-5°C day), relative to a day at 20°C, increases the annual all-age mortality rate by 0.4 (0.3) deaths per 100,000.”
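
To get a feel for the magnitude of the quoted estimate, here is a small illustrative calculation; the city population and the number of additional hot days are hypothetical values of my own, chosen only for illustration:

```python
# Back-of-envelope use of the quoted estimate: an additional 35C day (relative
# to a 20C day) raises the annual all-age mortality rate by about 0.4 deaths
# per 100,000. The population and day counts below are hypothetical.
effect_per_hot_day = 0.4 / 100_000   # deaths per person per extra 35C day

city_population = 5_000_000          # hypothetical metro area
extra_hot_days_per_year = 10         # hypothetical increase in 35C days

extra_deaths = effect_per_hot_day * city_population * extra_hot_days_per_year
print(f"Implied additional deaths per year: {extra_deaths:.0f}")
# About 200 extra deaths per year under these illustrative assumptions,
# before accounting for the adaptation the authors go on to model.
```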

These data allow the authors to look at mortality risks while accounting for different age groups, different locations within countries, and different levels of per capita income across countries. The framework also allows them to infer what kinds of adaptations people can make to higher temperatures. They write:

“The examples of Seattle, WA and Houston, TX, which have similar income levels, institutions, and other factors, but have very different climates, provide some high-level intuition for our approach. On average Seattle has just 0.001 days per year where the average temperature exceeds ≈32°C, while Houston experiences 0.31 of these days annually. Houston has adapted to this hotter climate, evidenced by the fact that a day above 32°C produces 1/40th of the excess mortality in Houston than it does in Seattle (Barreca et al., 2016). … Indeed, the difference in air conditioning penetration rates, which were 27% in Washington state and 100% in Texas as of 2000-4, provide evidence that the observed differences in temperature sensitivities between these cities reflect cost-benefit decisions.”

This working paper will be hard going for those not initiated into economic research, and the results aren’t simple to summarize. But the authors put it this way (citations omitted):

“Together, these two features of the analysis allow us to develop measures of the full mortality-related costs of climate change for the entire world, reflecting both the direct mortality costs (accounting for adaptation) and all adaptation costs. We find that the median estimate of the total mortality burden of climate change across 33 different climate models is projected to be worth 36 death equivalents per 100,000 at the end of the century or roughly 3.7% of global GDP when using standard assumptions about the value of a statistical life. Approximately 2/3 of the death equivalent costs are due to the costs of adaptation. Further, failing to account for income and climate adaptation as has been the norm in the literature would overstate the mortality costs of climate change by a factor of about 3.5. Finally, we note that there is evidence of substantial heterogeneity in impacts around the globe; at the end of the century we project an increase of about 3,800 death equivalents annually in Mogadishu and a decrease of about 1,100 annually in Oslo, Norway.”

In another recent study, Riccardo Colacito, Bridget Hoffmann, Toan Phan, and Tim Sablik look at “The Impact of Higher Temperatures on Economic Growth” (Federal Reserve Bank of Richmond, Economic Brief EB18-08, August 2018). A general finding in the climate change literature is that warmer temperatures would have less effect on the US economy, in part because agriculture and other obviously weather-dependent industries are a relatively small share of the US economy, and in part because the US economy has considerable resources for adaptation.

However, this paper points out that in hot summers, lots of US industries see a decline. For example, the real estate industry does less well in exceptionally hot summers–maybe because people are less enthusiastic about shopping for homes or moving when it’s very hot. The insurance industry does less well in hot summers, in part because extreme heat pushes up medical costs and reduces profits for insurance firms. Other studies have found that very high summer temperatures are associated with lower production at automobile plants. In addition, these effects of hotter summers on reduced output seem to be getting larger, rather than smaller, over the last four decades.

This study is based on variation across seasons and years, and it doesn’t take into account the kinds of adaptations that might occur in the longer run, so using it to project decades into the future seems like a stretch to me. But adaptations to higher temperatures often have substantial costs, too. Overall, this study, together with the previous estimates about the costs of mortality and adaptation, serves as a useful warning that higher temperatures and climate change aren’t just about farming.

Should Professors Share Returns from Innovation with their Employers?

When a professor working at a university or college develops an innovation that may lead to a new product or a new company, who should own the intellectual property? The professor? The university? Some mixture of the two?

On one side, one can argue that giving the professor most or all of the ownership of intellectual property–sometimes known as the “professor’s privilege”–will encourage that person to develop marketable ideas. On the other side, one can argue that if a university has a financial interest in professors who develop new ideas, the university is more likely to structure itself–including the expectations about allocation of time for faculty and graduate students and its investments in equipment and buildings–in a way that leads to more overall innovation. The common pattern since about 1980 has been a reduced emphasis on incentives for professors to innovate and an increased emphasis on incentives for universities to support innovation. The United States switched to this model back around 1980, and many western European countries have followed suit since then.

Hans K. Hvide and Benjamin F. Jones present evidence from Norway suggesting that this shift may have been a mistake, in their paper “University Innovation and the Professor’s Privilege” (American Economic Review, July 2018, 108(7): 1860–1898, seems freely available at present, or you can do an internet search to find pre-publication versions on the web). They write (citations omitted):

“The setting is Norway, which in 2003 ended the “professor’s privilege,” by which university researchers had previously enjoyed full rights to new business ventures and intellectual property they created. The new policy transferred two-thirds of these rights to the universities themselves, creating a policy regime like that which typically prevails in the United States and many other countries today. In addition to the policy experiment, Norway also provides unusual data opportunities. Registry data allow us to identify all start-ups in the economy, including those founded by university researchers. We can also link university researchers to their patents. We are thus able to study the reform’s effects on both new venture and patenting channels.

“Inspired partly by a belief that US universities are more successful at commercial innovation, many European countries have enacted laws in the last 15 years that substantially altered the rights to university-based innovations. In Germany, Austria, Denmark, Finland, and Norway, new laws ended the so-called “professor’s privilege.” Recognizing potential complementarities between institution-level and researcher-level investments, the new laws sought to enhance university incentives to support commercialization activity, including through the establishment of technology transfer offices (TTOs). However, while these reforms may have encouraged university-level investment, they also sharply increased the effective tax rate on university-based innovators, leaving the effect of such reforms theoretically ambiguous. Broadly, these national systems moved from an environment where university researchers had full property rights to a system that looks much like the US system today (since the 1980 US Bayh-Dole Act), where the innovator typically holds a minority of the rights, often one-third, and the university holds the remainder. …

“Our primary empirical finding is that the shift in rights from researcher to university led to an approximate 50 percent drop in the rate of start-ups by university researchers. This drop appears (i) in a simple pre-post analysis of university start-up rates, (ii) when compared to background rates of start-ups in Norway, and (iii) when analyzed at the level of the individual Norwegian citizen, controlling for fixed and time-varying individual-level characteristics. We further find that university researchers substantially curtailed their patenting after the reform, with patent rates falling by broadly similar magnitudes as seen with start-ups. In addition to these effects on the quantity of innovative output, we find evidence for decreased quality of both start-ups and patents, where, for example, university start-ups exhibit less growth and university patents receive fewer citations after the reform, compared to controls. Overall, the reform appeared to have the opposite effect as intended.”

Of course, universities are a powerful institutional lobby in favor of the idea that they should receive a share of rewards from innovations generated by their faculty. From a university’s point of view, it is preferable to receive two-thirds of the returns from a level of innovation that is 50% lower, while from society’s view it is preferable that innovation be twice as high–no matter who gets the returns.
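
The arithmetic behind that contrast is simple. Here is a minimal sketch using the rough magnitudes in the text (innovation falling by about half, the university taking two-thirds of the rights), with the pre-reform level of innovation normalized to 1:

```python
# Stylized before/after comparison of ending the professor's privilege,
# using the rough magnitudes in the text (innovation level normalized to 1).
innovation_before = 1.0
innovation_after = 0.5          # roughly 50% drop in start-ups/patents after reform

university_share_before = 0.0   # professor's privilege: researcher keeps the returns
university_share_after = 2/3    # post-reform: university holds about two-thirds

university_take_before = university_share_before * innovation_before   # 0.0
university_take_after = university_share_after * innovation_after      # ~0.33

print(f"University's take: {university_take_before:.2f} -> {university_take_after:.2f}")
print(f"Total innovation:  {innovation_before:.2f} -> {innovation_after:.2f}")
# The university is better off after the reform even though total innovation,
# which is what matters from society's point of view, falls by half.
```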

Of course, the Norwegian evidence from Hvide and Jones cannot be applied directly to the experience of the US or to other countries in western Europe. But at least in the short evidence review from Hvide and Jones, the evidence that ending the “professor’s privilege” would increase innovation seems weak. Yes, the passage of the Bayh-Dole act back in 1980 was followed by more patents going to universities, but it’s not at all clear that it led to more university-based innovation for the US economy as a whole. For evidence from other countries, the authors write: “In contemporaneous studies of the professor’s privilege, Czarnitzki et al. (2015) find a decline in university patenting in Germany after the reform there, while Astebro et al. (2015) find lower rates of PhDs leaving universities to start companies in the United States than in Sweden, which has maintained its professor’s privilege.”

I’m sure some American universities do a better job of supporting innovative professors than others. But I’ve also heard a fair number of horror stories from professors whose institution was so insistent about following its own procedures for how an innovation would be handled, and so concerned about the university getting a cut, that it became a genuine intrusion and hindrance to the process of innovation. Perhaps it’s time for some rethinking of the extent to which universities, or the professors at universities, should be viewed as the engines of innovation.

A Primer on the Jones Act and American Shipping

The Jones Act pops into public consciousness every few years, perhaps most recently in fall 2017 when President Trump suspended the law for 10 days to help hurricane relief efforts in Puerto Rico. Colin Grabow, Inu Manak, and Daniel Ikenson offer background on the law and make the case for its repeal in “The Jones Act: A Burden America Can No Longer Bear” (Cato Institute Policy Analysis #845, June 28, 2018). They begin:

“For nearly 100 years, a federal law known as the Jones Act has restricted water transportation of cargo between U.S. ports to ships that are U.S.-owned, U.S.-crewed, U.S.-registered, and U.S.-built. Justified on national security grounds as a means to bolster the U.S. maritime industry, the unsurprising result of this law has been to impose significant costs on the U.S. economy while providing few of the promised benefits. … While the law’s most direct consequence is to raise transportation costs, which are passed down through supply chains and ultimately reflected in higher retail prices, it generates enormous collateral damage through excessive wear and tear on the country’s infrastructure, time wasted in traffic congestion, and the accumulated health and environmental toll caused by unnecessary carbon emissions and hazardous material spills from trucks and trains. Meanwhile, closer scrutiny finds the law’s national security justification to be unmoored from modern military and technological realities.”

Thus, the Jones Act made it illegal for any ships that were not “U.S.-owned, U.S.-crewed, U.S.-registered, and U.S.-built” to deliver goods to Puerto Rico after it was pounded by Hurricane Maria.

When thinking about the costs of the Jones Act, it’s worth remembering that shipbuilding and shipping are examples of US industries that have been dramatically protected from foreign competition for nearly a century. If sustained protection from foreign competition were a useful path to the highest levels of efficiency and cost-effectiveness, then US ship-building and shipping should be elite industries. But in fact, US ship-building and shipping–safely protected from competition–have fallen far behind foreign competition, with costs and consequences that echo through the rest of the US economy–and probably diminish US national security, too.

As a starting point, less competition means less pressure to seek out efficiency gains. After nearly a century of protection from foreign competition, costs of ship-building in the US are far above the international competition.

“American-built coastal and feeder ships cost between $190 and $250 million, whereas the cost to build a similar vessel in a foreign shipyard is about $30 million. Accordingly, U.S. shippers buy fewer ships, U.S. shipyards build fewer ships, and merchant mariners have fewer employment opportunities to serve as crew on those nonexistent ships. Meanwhile, facing exorbitant replacement costs, ship owners are compelled to squeeze as much life as possible out of their existing vessels. … The typical economically useful life of a ship is 20 years. Yet three of every four U.S. container ships are more than 20 years old and 65 percent are more than 30 years old. … These increasingly decrepit vessels are not only inefficient, but dangerous…. [Oil] tanker ships manufactured in the United States cost about four times more than their foreign-built counterparts …

“Absent competitive forces, the U.S. shipbuilding industry has not felt compelled to evolve and similarly find its own competitive niche. Instead, it produces numerous types of vessels for which it possesses no particular advantages compared to foreign sources, and at a much higher cost. … This mediocrity is further confirmed by the absence of foreign demand for U.S. ships. Exports from the sector, including repair services, accounted for a mere 4.6 percent of the industry’s revenue in 2014.”

The cost of shipping between US ports, including both the cost of the ships themselves and other costs, is also far above international levels.

“The most obvious and direct effect of the Jones Act is on waterborne shipping rates. By limiting participation in the U.S. maritime and inland waterways transportation sector to U.S.-built, U.S.-owned, U.S.-flagged, and U.S.-crewed ships, the costs of moving cargo by water are artificially inflated. … To get a sense of the inefficiencies, a Maritime Administration report found that the operating costs of U.S.-flagged vessels engaged in foreign commerce in 2010 were 2.7 times greater than those of their foreign competitors. …

“For reference, within the continental United States, moving crude oil from the Gulf Coast to the Northeast on a Jones Act tanker costs $5 to $6 per barrel, but only $2 per barrel when it is shipped from the Gulf Coast to Eastern Canada on a foreign-flagged vessel. … The Jones Act also explains the seemingly curious sourcing decisions for other commodities, such as rock salt. Maryland and Virginia, for example, obtain the product for wintertime use from distant Chile instead of domestically, despite the United States being the world’s largest producer of that commodity.”

With limited demand for new US-built ships from either domestic or foreign buyers, because of the high costs of ships and shipping, the US shipbuilding industry has over time become tiny compared to its international competitors. Indeed, US ship-building has become heavily reliant on defense purchases: “Nearly two-thirds (98 of 150) of new large, deep-draft vessel orders in 2014 came from the military, which accounted for 70 percent of the shipbuilding and ship-repairing industries’ revenues in 2014 and 2015. …

“In 2015 the Maritime Administration listed the number of active shipyards at 124 but also pointed out that, of those, only 22 are ‘mid-sized to large shipyards capable of building naval ships and submarines, oceangoing cargo ships, drilling rigs and high-value, high-complexity mid-sized vessels.’ This pales in comparison to shipyards in Asia. Japan, for instance, currently has more than 1,000 shipyards, and it is estimated that China has more than 2,000. There are also only 7 active major shipbuilding yards in the United States, as compared to roughly 60 major shipyards in Europe (major shipyards are defined as those producing ships longer than 150 meters). Table 1 presents the top 10 countries for the total number of ships built in gross tons during 2014–2016. At under 1 million gross tons, U.S. shipbuilders’ output was less than 1 percent of China’s and Korea’s shipbuilders.”

Also, with high costs of ships and shipping, the amount of freight that travels by water is low in the US, and declining, while in other places it is much higher and rising.

“Although 38 states and the District of Columbia are connected by navigable waterways and marine highways, and nearly 40 percent of the U.S. population lives in coastal counties, coastal shipping of cargo between U.S. ports in the Lower 48 states comprises a negligible 2 percent of domestic freight. … In the European Union, where cabotage among the member states is permitted, the corresponding figure is 40 percent. In Australia, where vessels need not be built domestically to participate in cabotage services, coastal shipping accounts for 15 percent of domestic freight. Meanwhile, after relaxing its cabotage restrictions in 1994, New Zealand experienced a decrease of approximately 20–25 percent in coastal freight rates over the subsequent six years.”

Unsurprisingly, the high cost of shipping by water means that in the US, freight is instead shipped overland. Consider, for example, all the trucks and trains that run up and down the east coast or the west coast.

“[I]n the continental United States, businesses have alternatives to waterborne transportation. And the data show that the amount of U.S. cargo shipped along the Atlantic coast, Pacific coast, and Great Lakes today is about half the volume of the cargo shipped that way in 1960, despite the economy’s considerable growth in the intervening years. Over the same period, railroads have increased their transport volume by about 50 percent and intercity trucks have increased their freight by more than 200 percent. To confirm that waterborne shipping at market rates didn’t lose its appeal, river barges and coastal ships linking the United States with Canada and Mexico experienced growth in their freight tonnage of more than 300 percent over the same period. … While the Jones Act reduced the supply of ships and drove up the costs of waterborne shipping, it increased demand for road transport, presumably driving up the prices of trucking and rail. …

“[A]ccording to the Congressional Research Service, “some of the most congested truck routes, such as Interstate 95 in the East and Interstate 5 in the West, run parallel to coastal shipping routes, and water shipment through the Saint Lawrence Seaway and the Great Lakes has the potential to relieve pressure on major east–west highways, pipelines, and railroads in the Midwest.””

This shift away from water-based transportation to overland road and rail has a variety of costs, like greater congestion and wear-and-tear on the roads. It also has environmental costs like higher carbon emissions:

“According to the World Shipping Council, maritime shipping ‘is the world’s most carbon-efficient form of transporting goods—far more efficient than road or air transport.’ Maritime shipping produces approximately 10–40 grams of carbon dioxide to carry one ton of cargo one kilometer. In contrast, rail transport produces 20–150 grams, and trucking—whose tonnage is forecast to grow 44 percent by 2045 according to the Department of Transportation—produces 60–150 grams.”
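
Using the midpoints of the quoted ranges, a quick comparison for a single hypothetical shipment makes the gap concrete; the tonnage and distance below are illustrative assumptions, not figures from the report:

```python
# Quick comparison using the midpoints of the quoted emission ranges
# (grams of CO2 per ton of cargo per kilometer). The shipment size and
# distance below are hypothetical, chosen only for illustration.
grams_per_ton_km = {
    "maritime": (10 + 40) / 2,    # ~25 g
    "rail":     (20 + 150) / 2,   # ~85 g
    "truck":    (60 + 150) / 2,   # ~105 g
}

tons = 1_000          # hypothetical shipment
kilometers = 1_500    # roughly an East Coast coastal run

for mode, grams in grams_per_ton_km.items():
    tonnes_co2 = grams * tons * kilometers / 1e6   # grams -> metric tons
    print(f"{mode:>8}: {tonnes_co2:,.0f} tons of CO2")
# On these midpoints, trucking emits roughly four times as much CO2 as
# coastal shipping for the same cargo and distance.
```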

The argument a century ago, and since, has been that a domestic ship-building industry is essential for national defense. Maybe so! But if that is the goal, the Jones Act is sorely failing to accomplish it. Instead, the Navy can’t afford the extra ships it wants, the number of available US civilian ships and the knowledgeable workers to run them is shrinking, and military operations have had to find ways to make use of foreign ships. Some anecdotes drive home the point:

“When U.S. forces were deployed to Saudi Arabia during Operations Desert Shield and Desert Storm, a much larger share of their equipment and supplies was carried by foreign-flagged vessels (26.6 percent) than U.S.-flagged commercial vessels (12.7 percent). Only one U.S.-flagged ship was Jones Act compliant. In fact, the shipping situation was so desperate that on two occasions the United States requested transport ships from the Soviet Union and was rejected both times. … At the time, Vice Admiral Paul Butcher, who was then deputy commander of the U.S. Transportation Command, remarked that without the availability of foreign-flag sealift, ‘It would have taken us three more months to complete the sealift ourselves.’ …

“Of the 46 ships comprising the Maritime Administration’s Ready Reserve Force—a fleet that helps transport combat equipment and supplies ‘during the critical surge period before commercial ships can be marshaled’—30 are foreign-built. Although worthy to serve in the country’s defense, these same ships are ineligible to engage in coastwise trade.”

As a general rule, it is unlikely that the solution for a problem is identical to the cause of the problem. But after nearly a century in which protection from international competition sheltered US ship-building and shipping from the need to compete, and thus led them into near-obsolescence, the reason for keeping the Jones Act in place seems to be that, without it, the US shipping and ship-building industry would have a hard time competing. It’s a little like arguing that the cure for a drug addiction is a continuing supply of the drug to which you are addicted.

I’m willing to have a discussion about what policy steps might be useful in creating a US ship-building and shipping industry that is internationally competitive. The necessary steps might be dramatic and costly. But the first step in that discussion is the acknowledgement that the long-run effects of the Jones Act have been terrible and counterproductive for the US shipbuilding and shipping industries. It has rendered those industries essentially unable to compete on the world stage, while creating costs throughout the rest of the US economy and reducing US military security. Any plan for US shipbuilding and shipping which doesn’t focus on how to bring the Jones Act to an end is not serious.

What Should an Economics Research Article Look Like?

The shape of an economics research article is changing: much longer, and with more co-authors. David Card and Stefano DellaVigna documented these patterns in “Nine Facts about Top Journals in Economics,” published in the March 2013 issue of the Journal of Economic Literature (51:1, pp. 144–161; freely available here, or with a subscription here). They have now updated some of those facts with five years of additional data in “Update to ‘Nine Facts about Top Journals in Economics.’”

The figure below shows two of the changes, using data from five of the most prominent research journals in economics (namely, the American Economic Review, Econometrica, the Journal of Political Economy, the Quarterly Journal of Economics, and the Review of Economic Studies). The blue line shows trends in the average page length of an economics article: roughly a tripling in length from 1980 up to the present. The yellow line shows a change in the number of co-authors. Back in the 1980s, the average paper had 1.5 co-authors; now, it’s about 2.5 co-authors.

Lots of hypotheses have been proposed to explain these patterns.

  • Greater paper length may result from referees and journal editors who have become more likely to demand additional details and background, including an overview of past literature, both a theory and an empirical section in every paper, and requests to spell out and respond to alternative hypotheses and counterarguments.
  • Greater length may also result from hypersensitive authors who feel as if they simply cannot bear to go into print without explicitly guarding themselves by responding to every comment they have ever received or countering every criticism they have ever heard when presenting their paper at seminars.
  • The rise of online publication means that costs of paper and postage mean less, so physical weight matters less, and the monetary costs of longer papers to publishers are less salient. 
  • Some journals charge libraries a subscription fee by the number of pages downloaded, and thus have an incentive for longer articles. 
  • Empirical work has become more prominent, and in some cases more complex, encouraging both greater length and also more co-authors to deal with different parts of the analysis. 
  • Co-authoring several papers may be a safer way for a young professor to build up a portfolio of publications rather than focusing on individually authored papers. 
  • The rise of computing power and the internet has made it easier for co-authors to work together even if they are not physically located at the same place.

I’m sure there are additional possible causes. But from my perch as Managing Editor of the Journal of Economic Perspectives for the last 32 years, here are a couple of reactions.

At my own journal, many of our authors send along first drafts that are longer than we want. Often the additional length is modest–maybe 10-20% over. But it’s common for us to receive drafts that are 50% longer than we want, and at least a few times each year we receive first drafts that are more than double our requested length. A lot of my job as an editor is to hold the line on length: in fact, as part of the refereeing process, I do a hands-on edit of articles and actually cut down the length. We then ask the authors to work with that shortened draft, or at least to take it into account, when revising in response to substantive comments.

What stops other journals from setting a hard length limit and sticking to it? One plausible answer is that any journal which tries to enforce such length limits may well find that submissions migrate to other journals without such a length limit. Another answer is that it can be a time-consuming and painful job to trim down length substantially. It’s a full-time job for me. And trimming one’s own writing can be especially painful. Lots of journal editors just don’t want to bang heads with authors over length issues.

But when the average length of an article triples, it raises more fundamental questions about the purpose of an academic article. (And it’s worth remembering that this is the length of the published article, not including appendix materials typically published online, which seem to me to have exploded in length as well.)

For example, is a research article actually meant to be read, word-for-word, beginning to end? It seems plausible that if articles have become three times as long, then students and researchers living in a world with just 24 hours in a day can read only about one-third as many articles. My guess is that most articles are not actually meant to be read, but only to be skimmed. Perhaps skimming a 48-page article doesn’t take three times as long as skimming the 16-page articles of several decades ago. But it probably takes even the experts a little longer.

At a deeper level, is a research article meant to describe what a researcher has most recently found? If so, then current articles are burying their central purpose in waves of other material. David Autor, who was my boss as editor of JEP some years back, recently said that reading a 94-page draft of a paper was “like being bludgeoned to death with a nerf bat.”

Alternatively, is a research article meant to be a full record of how the subject at hand stands at a certain point in time: that is, background context, previous findings, theoretical framework, mixture of new empirical or theoretical findings, concerns and questions about the new findings, all surrounded with side avenues and implications for future research? If that’s the goal, then I suspect the length of papers is just going to keep drifting higher, and those who read (or skim) papers are just going to have a harder and harder time.

From the perspective of a professional editor, like me, a lot of the length in academic articles is just flabby writing, not an excess of perhaps useful background and details. But the tripling of length in academic journal articles is not mostly about flabby writing. Instead, this extreme growth of length is a norm that has evolved, a norm that’s bad for readers of research articles and that makes light-touch skimming a survival strategy–and thus is not well-suited for disseminating actual research findings.

Some Facts on Global Current Account Balances

I’m the sort of joyless and soul-killing conversationalist who likes to use facts as the background for arguments. In that spirit, here’s an overview of some facts about global trade balances, taken from the IMF External Sector Report: Tackling Global Imbalances and Rising Trade Tensions (July 2018).

Here’s a list of the 15 countries with the largest trade surpluses and deficits, as measured by the current account balance. It also shows these magnitudes as a share of world GDP and a share of the country’s GDP (for the record, I’ve edited the table by cutting out the columns for 2014-2016).

A few facts jump out at me:

1) The US has the largest trade deficit in the world in absolute terms. However, trade deficits are larger as a share of the national economy in a number of other countries, including the UK, Canada, Turkey, Argentina, Algeria, Egypt, Lebanon, Pakistan, and Oman.

2) Germany has by far the largest trade surplus in the world in absolute terms. Indeed, the trade surplus for the euro-area as a whole (not shown in the table) is $442 billion–very similar in size to the US trade deficit.

3) China, which seems to be arch-enemy #1 for trade at present, is third-highest in absolute size of trade surplus, well behind Germany and Japan. Measured as a share of national GDP, China’s trade surplus is actually the smallest of the top 15 trade surplus countries listed here.

4) If you subscribe to the economically illiterate view that trade surpluses are a measure of a nefarious ability to trade unfairly and exploit the rest of the world, while trade deficits are a sign of victimization by the beliefs of naive and overly trusting free trade fanatics, you need to match those beliefs to the national patterns shown here. That is, you need to believe that the 15 countries at the top are unfairly hustling the rest of the world economy, while the 15 at the bottom are paying the price.

5) The IMF report also emphasizes that there was a major shift in the configuration of global trade balances back around 2013, which has continued since then: trade surpluses and deficits are more concentrated in advanced economies, and less in the rest of the world economy.

“Global surpluses and deficits have become increasingly concentrated in AEs [advanced economies], as China and oil exporters have seen their current account surpluses narrow and the deficits of some EMDEs [emerging market and developing economies] (for example, Brazil, India, Indonesia, Mexico, South Africa) have shrunk. Key drivers of this reconfiguration were the sharp drop earlier this decade in oil prices, which have recovered somewhat after bottoming out in 2016, and the gradual tightening of global financing conditions reflecting prospects for monetary policy normalization in the United States. Also at work have been asymmetries in demand recovery and the associated policy responses in systemic economies … After 2013, higher or persistently large surpluses in key advanced economies (for example, Germany, Japan, the Netherlands) were underpinned by relatively weaker domestic demand, constrained by fiscal consolidation efforts—necessary in some cases, given compressed fiscal space. Meanwhile, higher or persistent current account deficits in other AEs (United Kingdom, United States) reflected a stronger recovery in domestic demand, supported by some recent fiscal easing. Meanwhile, the narrowing of China’s underlying current account surplus was supported by a marked relaxation of fiscal and credit policies, masking lingering structural problems and causing a buildup of domestic vulnerabilities. These asymmetries in demand strength have also led to differences in monetary policy (as seen by the evolution of longer term nominal bond yields) and currencies.”

In passing, it’s worth noticing that the IMF economists explain these shifts in current account surpluses and deficits without reference to trade becoming more or less fair–which makes sense, because there were no major changes in the rules over this time. Instead, they focus on demand in different countries, along with fiscal and monetary policy choices.

In fact, China’s current account surplus has dropped dramatically in the last decade, from about 10% of GDP in 2007 to 1.4% in the table above. The IMF writes about China:

“The CA [current account] surplus continued to decline, reaching 1.4 percent of GDP in 2017 … about 0.4 percentage points lower than in 2016. This mainly reflects a shrinking trade balance (driven by high import volume growth), notwithstanding REER [real effective exchange rate] depreciation. Viewed from a longer perspective, the CA surplus declined substantially relative to the peak of about 10 percent of GDP in 2007, reflecting strong investment growth, REER appreciation, weak demand in major advanced economies, and, more recently, a widening of the services deficit …”

Conversely, the US current account trade deficit has declined from about 6% of GDP back in 2006 down to about 2.5% of GDP since 2014.

It is of course not a coincidence that the peak of China’s trade surpluses coincides with the trough of US trade deficits, back around 2006-2007. China’s exports and trade surplus exploded after China entered the World Trade Organization in 2001, much faster than anyone (including China’s government) expected. China’s exports of goods and services were 20.3% of China’s GDP in 2001, and then took off to hit 36% of GDP in 2006, but since have fallen back to 19.7% of China’s GDP in 2017. Conversely, the US economy was inhaling imports during its credit-led housing boom back in about 2005-2006.

Maybe there was a case for seeking to limit disruption from China’s exports during the “China shock” period from 2001-2007 or so, when China’s exports and trade surplus exploded in size. But it’s now a decade later. And both China’s trade surpluses and America’s trade deficits have dramatically declined during that decade, well before any shots were fired in President Trump’s trade war.

The Emergence and Erosion of the Retail Sales Tax

About 160 countries around the world, including all the other high-income countries of the world, use a value-added tax. The US has no value-added tax, but 45 states and several thousand cities use a sales tax as an alternative method of taxing consumption. John L. Mikesell and Sharon N. Kioko provide a useful overview of the issues in “The Retail Sales Tax in a New Economy,” written for the 7th Annual Municipal Finance Conference, which was held on July 16-17, 2018, at the Brookings Institution. Video of the conference presentation of the paper, with comments and discussion, is available here.

Here’s a short summary of the emergence and erosion of the retail sales tax (footnotes omitted):

“The American retail sales tax emerged from a desperation experiment in Mississippi in the midst of the Great Depression. Revenue from the property tax, the largest single source of state tax revenue at the time, collapsed, falling by 11.4 percent from 1927 to 1932 and by another 16.8 percent from 1932 to 1934. State revenue could not cover their service obligations or provide expected assistance to local governments. Mississippi (followed by West Virginia) showed that retail sales taxes could produce immediate cash collections, even in low-income jurisdictions. Other states paid attention. In 1933, eleven other states adopted the tax (two let the tax expire almost immediately). By 1938, twenty-two states (plus Hawaii, not yet a state) were collecting the tax; six others had also imposed the tax for a short time but had let them expire. …

“The national total retail sales tax collections exceeded the collections from every other state tax from 1947 through 2001. It was also the largest tax producer in 2003 and 2004 also (years in which individual income tax revenue was still impacted by the 2001 recession), but it was surpassed by state individual income tax revenues in other years since 2001. … By fiscal 2016, total state individual income tax collections exceeded $345 billion, compared to over $288 billion for state retail sales taxes. However, those national totals conceal the continuing dominance of the retail sales taxes in a number of states …

A major and ongoing issue with US sales taxes is that, from the start, they mostly did not cover services. Thus, as the US has shifted to a service-based economy, the amount of consumer spending going to goods covered by the sales tax has diminished. As the base of the sales tax has diminished, states have gradually raised the rate of the sales tax so that it would bring in a similar proportion of overall state revenue. This dynamic of higher sales tax rates imposed on a shrinking base is not sensible. (The figures discussed here look only at the 45 states with sales taxes.)

“[Here is] the history of mean retail sales tax breadth (implicit tax base / state personal income) across the states from 1970 to 2016. The record is one of almost constant decline, from 49.0 percent in 1970 to 37.3 percent in 2016. … The typical state retail sales tax base has narrowed as a share of the economy of the state over the years and this has meant that, in order for states to maintain the place of their sales tax in their revenue systems, they have been required to gradually increase the statutory tax rate they apply to that base. … [L]ittle good can be said about a narrow base / high statutory rate revenue policy. …

“Unfortunately, many states got off to a bad start when they initially adopted their sales taxes and excluded all or almost all household service purchases from the tax base and it has proven to be difficult to correct that initial error. Extending the retail sales tax to include at least some services is a perennial topic whenever states are seeking additional revenue or considering reforms in their tax systems. … While the current typical sales tax base is around 20 percent narrower in 2016 compared with 1970, the base with all services added is actually about 11 percent broader, and the base without health care and education services is only 8 percent below its 1970 level. …”
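
The quoted base-breadth figures imply how much statutory rates have had to rise just to stand still. Here is a small sketch of that arithmetic, holding sales tax revenue (and personal income) constant by assumption:

```python
# How much must the statutory rate rise to offset base erosion, holding
# sales tax revenue constant? Uses the mean base-breadth figures quoted above.
base_1970 = 0.490   # implicit tax base as a share of state personal income, 1970
base_2016 = 0.373   # same measure, 2016

# revenue = rate * base, so holding revenue (and personal income) fixed:
required_rate_increase = base_1970 / base_2016 - 1
print(f"Required increase in the statutory rate: {required_rate_increase:.0%}")
# About a 31% higher rate: for example, a 4% rate on the 1970 base would need
# to be about 5.25% on the 2016 base to raise the same share of personal income.
```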

Another perennial sales tax issue is that legislatures like to list items that will be exempt from the sales tax, or to declare tax “holidays” during which sales tax doesn’t need to be paid on items like back-to-school items, energy-saving appliances, emergency preparedness supplies, and other items. These policies are often justified as helping those with low incomes, but any policy which cuts taxes for 100% of the population in the name of helping the 15-20% of the population that is poor has a mismatch between its stated intentions and its reality. Several states have taken a much more sensible course: if the goal is to help poor people, then give poor people a tax credit, based on their income, so that sales taxes they pay can be rebated to them. Mikesell and Kioko write:

“The problems with [a sales tax] exemption are well-known – absence of targeting, high revenue loss, additional cost of compliance and administration, distortion of consumer behavior, reward for political support, etc. – and it is particularly distressing in light of the fact that the credit/rebate system normally operated through the state income tax provides an alternative relief approach that eliminates virtually all these difficulties. Currently five states (Maine, Kansas, Oklahoma, Idaho, and Hawaii) operate some form of sales tax credit that returns to families some or all of sales tax paid on purchases, giving greatest relative relief to lowest income families and lesser (or no) relief to more affluent families. … The credit / rebate system promises efficiency, equity, and less revenue loss. Its apparent unpopularity is somewhat surprising, particularly in light of the spread of the earned income tax credit program, a program with some similar characteristics.”

Yet another perennial sales tax issue is that the logic of the tax implies it should apply to goods and services purchased by households, not to business purchases. If a sales tax is applied to business purchases, it raises a risk of “pyramiding,” where one business pays sales tax on equipment and supplies from another business, and the consumers also pay sales tax on the finished product. If there are layers of businesses buying from each other along the supply chain, the sales tax can be imposed on a given product multiple times. Again, Mikesell and Kioko write:

“American retail sales taxes have not entirely gotten over the confusion that the tax is not on finished goods but rather should be on goods (and services) purchased for household consumption. The reality of sales taxation is that a considerable share of the overall sales tax base, roughly 40 percent on average, consists of input purchases by businesses. The tax on those purchases embeds in prices charged by those businesses, meaning that this share of the tax is effectively hidden from households, allowing legislatures to claim a statutory rate that is considerably less than the true rate borne by individuals. … The pattern does show a considerable movement toward removal of these business input purchases from the tax base, thus reducing the prospects for pyramiding, hidden tax burdens, distortions, and discrimination. However, states continue to tax purchases made by other business activities. Lawmakers are inclined to try to pick favorites for tax relief and appear to like glitz. Targeted preferences for motion pictures, certain sorts of research and development, or bids for the Super Bowl are attractive to politicians because they provide identifiable credit and possibly ribbon cutting not available with general exemption. Super Bowl bids are particularly egregious.”
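
To make the pyramiding concern concrete, here is a stylized two-stage example; the prices, the pass-through assumption, and the 6 percent statutory rate are all hypothetical:

```python
# Stylized illustration of sales tax "pyramiding": when business inputs are
# taxed, part of the tax is buried in the retail price and taxed again.
# The prices and the 6% statutory rate are hypothetical.
rate = 0.06

input_price = 60.0                     # supplier sells inputs to a retailer
input_tax = rate * input_price         # tax paid on the business input

# Assume the retailer passes its full input cost (including tax) through,
# adds a $40 margin, and the consumer pays sales tax on the retail price.
retail_price = input_price + input_tax + 40.0
retail_tax = rate * retail_price

total_tax = input_tax + retail_tax
effective_rate = total_tax / (input_price + 40.0)   # tax relative to pre-tax value added
print(f"Statutory rate: {rate:.1%}, effective rate: {effective_rate:.1%}")
# The same product ends up taxed more than once, so the effective burden
# exceeds the 6% statutory rate even though consumers only ever "see" 6%.
```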

A more recent issue is how jurisdictions with a sales tax can react to the rise of online sales from other jurisdictions. There seem to be several models developing. First, there is a “South Dakota” model of collecting sales tax from companies physically located in other states if they have total sales above a certain minimum level to South Dakota residents. The US Supreme Court upheld this law as constitutional this summer. Other states that have adopted this model include Indiana, Iowa, North Dakota, Massachusetts, Maine, Mississippi, Wyoming, and Alabama.

An alternative "Colorado" model requires sellers in a different state to notify both Colorado buyers and the Colorado tax authorities that state sales tax is due, but does not seek to collect the sales tax from those out-of-state sellers. Other states that have enacted this approach are Louisiana, Pennsylvania, Vermont, and Washington.

Yet another approach addresses the question of when the producer is in one state, the buyer is in another state, and the "market facilitator" through which an online transaction is carried out is in still another state. This approach seeks to make the market facilitator based in one state responsible for collecting sales taxes on behalf of other states. Alabama, Arizona, Oklahoma, Pennsylvania, Rhode Island, Washington, and Minnesota have taken this approach.

The lurking difficulty with a narrowing base and higher rates for the retail sales tax is that, at some point, the tax rate gets high enough that it becomes lucrative to find ways to avoid paying it:

\”The problem is that there has been a consensus, heavily based on pre-value-added tax experience in Scandinavian countries with high-rate retail sales taxes, that retail sales tax rates much above 10 percent are likely to produce compliance issues so difficult that the tax becomes almost impossible to administer. American retail sales tax rates are drifting ever closer to that danger level, particularly when local governments add their own rates to the rate levied by the state. … [S]tate statutory rates have drifted upward since 1970. Rates of 6 and 7 percent are no longer rare and a narrowing base will require more rate increases if the position of the sales tax is to be preserved (or expanded) in state revenue systems. Rates are moving toward the danger zone in which significant non-compliance begins to become more attractive and, unless states can manage the narrowing base problem, a compliance gap may become a significant challenge for state tax administrators in the first part of the 21st century.\”

Outside the US, where value-added taxes are high, there has been a spread of what is called "sales suppression software," under names like "phantomware" and "zappers." Basically, this software cooks the accounting books to make sales look lower, either by substituting lower prices for the higher prices that were actually charged, or by reducing the recorded number of transactions. This software takes care of other issues too, by producing fake inventory records if needed, or by running certain transactions through international cloud-based services that will be more difficult to track. As tax administration becomes increasingly based on electronic records, sales suppression software may turn into a real problem.

Albert Jay Nock on the Three Rules of Editorial Policy

For 31 years, I've been editing the Journal of Economic Perspectives. At the most basic level, editing is about pushing the author to have a point in the first place, and to make it clearly. Sounds simple, perhaps? On complex subjects, meeting those criteria can be a high hurdle to clear.
Albert Jay Nock, in his 1943 Memoirs of a Superfluous Man, tells a story along these lines in his description of the editorial policy at a magazine he had edited called The Freeman (p. 172):

\”In one way, our editorial policy was extremely easy-going, and in another way it was unbending as a ramrod. I can explain this best by an anecdote. One day Miss X steered in a charming young man who wanted to write for us. I took a liking to him at once, and kept him chatting for quite a while. When we came down to business, he diffidently asked what our policy was, and did we have any untouchable sacred cows. I said we certainly had, we had three of them, as untouchable and sacred as the Ark of the Covenant. He looked a bit flustered and asked what they were. 

\”The first one,\” I said, \”is that you must have a point. Second, you must make it out. The third one is that you must make it out in eighteen-carat, impeccable, idiomatic English.\”

\”But is that all?\” 

\”Isn\’t it enough for you?\” 

\’Why, yes, I suppose so, but I mean, is that all the editorial policy you have?\” 

\”As far as I know, it is,\” I said, rising. \”Now you run along home and write us a nice piece on the irremissibility of postbaptismal sin, and if you can put it over those three jumps, you will see it in print. Or if you would rather do something on a national policy of strangling all the girl-babies at birth, you might do that—glad to have it.\” 

The young man grinned and shook hands warmly. We got splendid work out of him. As a matter of fact, at one time or another we printed quite a bit of stuff that none of us believed in, but it all conformed to our three conditions, it was respectable and worth consideration. Ours was old-school editing, no doubt, but in my poor judgement it made a far better paper than more stringent methods have produced in my time.

I especially like the comment in the closing paragraph about how they "printed quite a bit of stuff that none of us believed in." For editors, agreeing with authors is overrated and in fact unnecessary.

Summer 2018 Journal of Economic Perspectives Available On-line

I was hired back in 1986 to be the Managing Editor for a new academic economics journal, at the time unnamed, but which soon launched as the Journal of Economic Perspectives. The JEP is published by the American Economic Association, which back in 2011 decided, to my delight, that the journal would be freely available on-line, from the current issue back to the first issue. Here, I'll start with the Table of Contents for the just-released Summer 2018 issue, which in the Taylor household is known as issue #125. Below that are abstracts and direct links for all of the papers. I may blog more specifically about some of the papers in the next week or two, as well.

_________________

Symposium: Macroeconomics a Decade after the Great Recession

\”What Happened: Financial Factors in the Great Recession,\” by Mark Gertler and Simon Gilchrist
At the onset of the recent global financial crisis, the workhorse macroeconomic models assumed frictionless financial markets. These frameworks were thus not able to anticipate the crisis, nor to analyze how the disruption of credit markets changed what initially appeared like a mild downturn into the Great Recession. Since that time, an explosion of both theoretical and empirical research has investigated how the financial crisis emerged and how it was transmitted to the real sector. The goal of this paper is to describe what we have learned from this new research and how it can be used to understand what happened during the Great Recession. In the process, we also present some new empirical work. We argue that a complete description of the Great Recession must take account of the financial distress facing both households and banks and, as the crisis unfolded, nonfinancial firms as well. Exploiting both panel data and time series methods, we analyze the contribution of the house price decline, versus the banking distress indicator, to the overall decline in employment during the Great Recession. We confirm a common finding in the literature that the household balance sheet channel is important for regional variation in employment. However, we also find that the disruption in banking was central to the overall employment contraction.
Full-Text Access | Supplementary Materials

\”Finance and Business Cycles: The Credit-Driven Household Demand Channel,\” by Atif Mian and Amir Sufi
What is the role of the financial sector in explaining business cycles? This question is as old as the field of macroeconomics, and an extensive body of research conducted since the Global Financial Crisis of 2008 has offered new answers. The specific idea put forward in this article is that expansions in credit supply, operating primarily through household demand, have been an important driver of business cycles. We call this the credit-driven household demand channel. While this channel helps explain the recent global recession, it also describes economic cycles in many countries over the past 40 years.
Full-Text Access | Supplementary Materials

\”Identification in Macroeconomics,\” by Emi Nakamura and Jón Steinsson
This paper discusses empirical approaches macroeconomists use to answer questions like: What does monetary policy do? How large are the effects of fiscal stimulus? What caused the Great Recession? Why do some countries grow faster than others? Identification of causal effects plays two roles in this process. In certain cases, progress can be made using the direct approach of identifying plausibly exogenous variation in a policy and using this variation to assess the effect of the policy. However, external validity concerns limit what can be learned in this way. Carefully identified causal effects estimates can also be used as moments in a structural moment matching exercise. We use the term \”identified moments\” as a short-hand for \”estimates of responses to identified structural shocks,\” or what applied microeconomists would call \”causal effects.\” We argue that such identified moments are often powerful diagnostic tools for distinguishing between important classes of models (and thereby learning about the effects of policy). To illustrate these notions we discuss the growing use of cross-sectional evidence in macroeconomics and consider what the best existing evidence is on the effects of monetary policy.
Full-Text Access | Supplementary Materials

\”The State of New Keynesian Economics: A Partial Assessment,\” by Jordi Galí
In August 2007, when the first signs emerged of what would come to be the most damaging global financial crisis since the Great Depression, the New Keynesian paradigm was dominant in macroeconomics. Ten years later, tons of ammunition has been fired against modern macroeconomics in general, and against dynamic stochastic general equilibrium models that build on the New Keynesian framework in particular. Those criticisms notwithstanding, the New Keynesian model arguably remains the dominant framework in the classroom, in academic research, and in policy modeling. In fact, one can argue that over the past ten years the scope of New Keynesian economics has kept widening, by encompassing a growing number of phenomena that are analyzed using its basic framework, as well as by addressing some of the criticisms raised against it. The present paper takes stock of the state of New Keynesian economics by reviewing some of its main insights and by providing an overview of some recent developments. In particular, I discuss some recent work on two very active research programs: the implications of the zero lower bound on nominal interest rates and the interaction of monetary policy and household heterogeneity. Finally, I discuss what I view as some of the main shortcomings of the New Keynesian model and possible areas for future research.
Full-Text Access | Supplementary Materials

\”On DSGE Models,\” by Lawrence J. Christiano, Martin S. Eichenbaum and Mathias Trabandt
The outcome of any important macroeconomic policy change is the net effect of forces operating on different parts of the economy. A central challenge facing policymakers is how to assess the relative strength of those forces. Economists have a range of tools that can be used to make such assessments. Dynamic stochastic general equilibrium (DSGE) models are the leading tool for making such assessments in an open and transparent manner. We review the state of mainstream DSGE models before the financial crisis and the Great Recession. We then describe how DSGE models are estimated and evaluated. We address the question of why DSGE modelers—like most other economists and policymakers—failed to predict the financial crisis and the Great Recession, and how DSGE modelers responded to the financial crisis and its aftermath. We discuss how current DSGE models are actually used by policymakers. We then provide a brief response to some criticisms of DSGE models, with special emphasis on criticism by Joseph Stiglitz, and offer some concluding remarks.
Full-Text Access | Supplementary Materials

\”Evolution of Modern Business Cycle Models: Accounting for the Great Recession,\” Patrick J. Kehoe, Virgiliu Midrigan and Elena Pastorino
Modern business cycle theory focuses on the study of dynamic stochastic general equilibrium (DSGE) models that generate aggregate fluctuations similar to those experienced by actual economies. We discuss how these modern business cycle models have evolved across three generations, from their roots in the early real business cycle models of the late 1970s through the turmoil of the Great Recession four decades later. The first generation models were real (that is, without a monetary sector) business cycle models that primarily explored whether a small number of shocks, often one or two, could generate fluctuations similar to those observed in aggregate variables such as output, consumption, investment, and hours. These basic models disciplined their key parameters with micro evidence and were remarkably successful in matching these aggregate variables. A second generation of these models incorporated frictions such as sticky prices and wages; these models were primarily developed to be used in central banks for short-term forecasting purposes and for performing counterfactual policy experiments. A third generation of business cycle models incorporate the rich heterogeneity of patterns from the micro data. A defining characteristic of these models is not the heterogeneity among model agents they accommodate nor the micro-level evidence they rely on (although both are common), but rather the insistence that any new parameters or feature included be explicitly disciplined by direct evidence. We show how two versions of this latest generation of modern business cycle models, which are real business cycle models with frictions in labor and financial markets, can account, respectively, for the aggregate and the cross-regional fluctuations observed in the United States during the Great Recession.
Full-Text Access | Supplementary Materials

\”Microeconomic Heterogeneity and Macroeconomic Shocks,\” by Greg Kaplan and Giovanni L. Violante
In this essay, we discuss the emerging literature in macroeconomics that combines heterogeneous agent models, nominal rigidities, and aggregate shocks. This literature opens the door to the analysis of distributional issues, economic fluctuations, and stabilization policies—all within the same framework. In response to the limitations of the representative agent approach to economic fluctuations, a new framework has emerged that combines key features of heterogeneous agents (HA) and New Keynesian (NK) economies. These HANK models offer a much more accurate representation of household consumption behavior and can generate realistic distributions of income, wealth, and, albeit to a lesser degree, household balance sheets. At the same time, they can accommodate many sources of macroeconomic fluctuations, including those driven by aggregate demand. In sum, they provide a rich theoretical framework for quantitative analysis of the interaction between cross-sectional distributions and aggregate dynamics. In this article, we outline a state-of-the-art version of HANK together with its representative agent counterpart, and convey two broad messages about the role of household heterogeneity for the response of the macroeconomy to aggregate shocks: 1) the similarity between the Representative Agent New Keynesian (RANK) and HANK frameworks depends crucially on the shock being analyzed; and 2) certain important macroeconomic questions concerning economic fluctuations can only be addressed within heterogeneous agent models.
Full-Text Access | Supplementary Materials

Symposium: Incentives in the Workplace

"Compensation and Incentives in the Workplace," by Edward P. Lazear
Labor is supplied because most of us must work to live. Indeed, it is called "work" in part because without compensation, the overwhelming majority of workers would not otherwise perform the tasks. The theme of this essay is that incentives affect behavior and that economics as a science has made good progress in specifying how compensation and its form influences worker effort. This is a broad topic, and the purpose here is not a comprehensive literature review on each of many topics. Instead, a sample of some of the most applicable papers are discussed with the goal of demonstrating that compensation, incentives, and productivity are inseparably linked.
Full-Text Access | Supplementary Materials

\”Nonmonetary Incentives and the Implications of Work as a Source of Meaning,\” by Lea Cassar and Stephan Meier
Empirical research in economics has begun to explore the idea that workers care about nonmonetary aspects of work. An increasing number of economic studies using survey and experimental methods have shown that nonmonetary incentives and nonpecuniary aspects of one\’s job have substantial impacts on job satisfaction, productivity, and labor supply. By drawing on this evidence and relating it to the literature in psychology, this paper argues that work represents much more than simply earning an income: for many people, work is a source of meaning. In the next section, we give an economic interpretation of meaningful work and emphasize how it is affected by the mission of the organization and the extent to which job design fulfills the three psychological needs at the basis of self-determination theory: autonomy, competence, and relatedness. We point to the evidence that not everyone cares about having a meaningful job and discuss potential sources of this heterogeneity. We sketch a theoretical framework to start to formalize work as a source of meaning and think about how to incorporate this idea into agency theory and labor supply models. We discuss how workers\’ search for meaning may affect the design of monetary and nonmonetary incentives. We conclude by suggesting some insights and open questions for future research.
Full-Text Access | Supplementary Materials

\”The Changing (Dis-)utility of Work,\” by Greg Kaplan and Sam Schulhofer-Wohl
We study how changes in the distribution of occupations have affected the aggregate non-pecuniary costs and benefits of working. The physical toll of work is less now than in 1950, with workers shifting away from occupations in which people report experiencing tiredness and pain. The emotional consequences of the changing occupation distribution vary substantially across demographic groups. Work has become happier and more meaningful for women, but more stressful and less meaningful for men. These changes appear to be concentrated at lower education levels.
Full-Text Access | Supplementary Materials

Individual Articles

\”Social Connectedness: Measurement, Determinants, and Effects,\” by Michael Bailey, Rachel Cao, Theresa Kuchler, Johannes Stroebel and Arlene Wong
Social networks can shape many aspects of social and economic activity: migration and trade, job-seeking, innovation, consumer preferences and sentiment, public health, social mobility, and more. In turn, social networks themselves are associated with geographic proximity, historical ties, political boundaries, and other factors. Traditionally, the unavailability of large-scale and representative data on social connectedness between individuals or geographic regions has posed a challenge for empirical research on social networks. More recently, a body of such research has begun to emerge using data on social connectedness from online social networking services such as Facebook, LinkedIn, and Twitter. To date, most of these research projects have been built on anonymized administrative microdata from Facebook, typically by working with coauthor teams that include Facebook employees. However, there is an inherent limit to the number of researchers that will be able to work with social network data through such collaborations. In this paper, we therefore introduce a new measure of social connectedness at the US county level. Our Social Connectedness Index is based on friendship links on Facebook, the global online social networking service. Specifically, the Social Connectedness Index corresponds to the relative frequency of Facebook friendship links between every county-pair in the United States, and between every US county and every foreign country. Given Facebook\’s scale as well as the relative representativeness of Facebook\’s user body, these data provide the first comprehensive measure of friendship networks at a national level.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

Mark Twain on Extrapolation: "Such Wholesale Returns of Conjecture Out of Such a Trifling Investment of Fact"

I sometimes say, with a smile and a wince, that it only takes three data points for economists to start building a theory, and that in a pinch, we can make do with even less data. But of course, anyone who develops a theory based on limited data is prone to false extrapolations. Mark Twain offered one vivid example in his 1883 memoir Life on the Mississippi. The passage below describes how there are places where the river loops back and forth in horseshoe curves. At some point, a cut-through forms across the neck of a curve (often caused by nature, but sometimes with an assist from those who saw the value of riverfront property). The river charges through the cut-through instead, and thus becomes shorter.

Twain uses this set of facts for a sarcastic jab at science and extrapolation. I quote here from the Project Gutenberg version of Life on the Mississippi, from near the start of Chapter 17. (For the curious, a few lines of arithmetic after the quotation work through the extrapolation Twain is lampooning.)

\”They give me an opportunity of introducing one of the Mississippi\’s oddest peculiarities,—that of shortening its length from time to time. … The water cuts the alluvial banks of the \’lower\’ river into deep horseshoe curves; so deep, indeed, that in some places if you were to get ashore at one extremity of the horseshoe and walk across the neck, half or three quarters of a mile, you could sit down and rest a couple of hours while your steamer was coming around the long elbow, at a speed of ten miles an hour, to take you aboard again. …

\”Pray observe some of the effects of this ditching business. Once there was a neck opposite Port Hudson, Louisiana, which was only half a mile across, in its narrowest place. You could walk across there in fifteen minutes; but if you made the journey around the cape on a raft, you traveled thirty-five miles to accomplish the same thing. In 1722 the river darted through that neck, deserted its old bed, and thus shortened itself thirty-five miles. In the same way it shortened itself twenty-five miles at Black Hawk Point in 1699. Below Red River Landing, Raccourci cut-off was made (forty or fifty years ago, I think). This shortened the river twenty-eight miles. In our day, if you travel by river from the southernmost of these three cut-offs to the northernmost, you go only seventy miles. To do the same thing a hundred and seventy-six years ago, one had to go a hundred and fifty-eight miles!—shortening of eighty-eight miles in that trifling distance. At some forgotten time in the past, cut-offs were made above Vidalia, Louisiana; at island 92; at island 84; and at Hale\’s Point. These shortened the river, in the aggregate, seventy-seven miles.

\”Since my own day on the Mississippi, cut-offs have been made at Hurricane Island; at island 100; at Napoleon, Arkansas; at Walnut Bend; and at Council Bend. These shortened the river, in the aggregate, sixty-seven miles. In my own time a cut-off was made at American Bend, which shortened the river ten miles or more.

\”Therefore, the Mississippi between Cairo and New Orleans was twelve hundred and fifteen miles long one hundred and seventy-six years ago. It was eleven hundred and eighty after the cut-off of 1722. It was one thousand and forty after the American Bend cut-off. It has lost sixty-seven miles since. Consequently its length is only nine hundred and seventy-three miles at present.

\”Now, if I wanted to be one of those ponderous scientific people, and \’let on\’ to prove what had occurred in the remote past by what had occurred in a given time in the recent past, or what will occur in the far future by what has occurred in late years, what an opportunity is here! Geology never had such a chance, nor such exact data to argue from! Nor \’development of species,\’ either! Glacial epochs are great things, but they are vague—vague. Please observe:—

\”In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period,\’ just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.\”