A Skeptical Look at the Rigor of Antitrust Enforcement

ProPublica is a nonprofit that does long-form investigative journalism. Like most such journalism, its work typically tells a story–which is to say that it has a slant. But the stories are also scrupulously sourced and well-written in a strong-but-fact-based tone. The organization has recently published two stories that offer a skeptical look at antitrust enforcement.

Justin Elliott has written “The American Way,” which is subtitled “President Obama promised to fight corporate concentration. Eight years later, the airline industry is dominated by just four companies. And you’re paying for it” (October 11, 2016). Here’s the start:

Three years ago, the Obama administration unleashed its might on behalf of beleaguered American air travelers, filing suit to block a mega-merger between American Airlines and US Airways. The Justice Department laid out a case that went well beyond one merger. “Increasing consolidation among large airlines has hurt passengers,” the lawsuit said. “The major airlines have copied each other in raising fares, imposing new fees on travelers, reducing or eliminating service on a number of city pairs, and downgrading amenities.”

The Obama administration itself had helped create that reality by approving two previous mergers in the industry, which had seen nine major players shrink to five in a decade. In the lawsuit, the government was effectively admitting it had been wrong. It was now making a stand. Then a mere three months later, the government stunned observers by backing down. It announced a settlement that allowed American and US Airways to form the world’s largest airline in exchange for modest concessions that fell far short of addressing the concerns outlined in the lawsuit.

The Justice Department’s abrupt reversal came after the airlines tapped former Obama administration officials and other well-connected Democrats to launch an intense lobbying campaign, the full extent of which has never been reported. They used their pull in the administration, including at the White House, and with a high-level friend at the Justice Department, going over the heads of staff prosecutors. And just days after the suit was announced, the airlines turned to Chicago Mayor Rahm Emanuel, Obama’s first White House chief of staff, to help push back against the Justice Department.

Some lawyers and officials who worked on the American-US Airways case now say they were “appalled” by the decision to settle, as one put it.

Jesse Eisinger and Justin Elliott wrote the second article, called “These Professors Make More Than a Thousand Bucks an Hour Peddling Mega-Mergers” (November 16, 2016). The subtitle reads: “The economists are leveraging their academic prestige with secret reports justifying corporate concentration. Their predictions are often wrong and consumers pay the price.” Here’s a taste:

“Economists who specialize in antitrust — affiliated with Chicago, Harvard, Princeton, the University of California, Berkeley, and other prestigious universities — reshaped their field through scholarly work showing that mergers create efficiencies of scale that benefit consumers. But they reap their most lucrative paydays by lending their academic authority to mergers their corporate clients propose. Corporate lawyers hire them from Compass Lexecon and half a dozen other firms to sway the government by documenting that a merger won’t be “anti-competitive”: in other words, that it won’t raise retail prices, stifle innovation, or restrict product offerings. Their optimistic forecasts, though, often turn out to be wrong, and the mergers they champion may be hurting the economy.

“Some of the professors earn more than top partners at major law firms. Dennis Carlton, a self-effacing economist at the University of Chicago’s Booth School of Business and one of Compass Lexecon’s experts on the AT&T-Time Warner merger, charges at least $1,350 an hour. In his career, he has made about $100 million, including equity stakes and non-compete payments, ProPublica estimates. Carlton has written reports or testified in favor of dozens of mergers, including those between AT&T-SBC Communications and Comcast-Time Warner, and three airline deals: United-Continental, Southwest-Airtran, and American-US Airways.”

What I especially like about these two ProPublica stories is that they vividly illustrate two lessons worth keeping in mind, one about the economy and one about economic policy-making.
One lesson is that there are concrete and well-researched examples of how mergers between firms have led to higher prices for consumers. Two examples that get a lot of attention in these articles are the recent spate of airline mergers, and the 2006 deal in which Whirlpool acquired its competitor Maytag. These examples (and others) help to emphasize the importance of competition, and the potential for government antitrust action to benefit consumers. As the articles readily acknowledge, most mergers are simply approved by the government with only cursory examination, which is appropriate, because they don’t raise competitive issues of importance and firms should be able to make their own decisions (for better or worse) in a market-oriented economy. But in teaching and arguing, it’s useful to have some examples of mergers that don’t seem so harmless.

The second lesson is that government regulatory decisions are not formulated in a laboratory or a seminar room. They are made through a process of politics and law, which includes lobbying and behind-the-scenes pressure early in the process, which can then be followed by lawsuits and high-priced consultants if needed. Economists refer to a theory of “regulatory capture,” meaning that those who are being regulated have powerful incentives to learn to manipulate the regulatory apparatus. There are of course attempts to insulate the regulators from such pressures, and then to insulate the insulation. But those who are supposed to be the targets of regulation can often find ways to avoid or minimize the way the rules are applied to them, and also often find ways to use the regulatory apparatus to their own advantage–for example, to impose costs on competitors or to block their entry. (For a classic statement of this argument, see George Stigler’s 1971 article, “The Theory of Economic Regulation,” in the Bell Journal of Economics and Management Science, available here or through JSTOR.)

An honest defense of the role of markets in economic activity needs to be warts-and-all, both pointing out strengths and acknowledging weaknesses and failings. An honest defense of the role of government regulation needs to be warts-and-all, too, which means not just discussing what angelically perfect regulators might accomplish, but how real-world regulators actually perform.

Nonstandard Employment Around the World

Standard employment refers to a situation where a worker has a full-time job, and can presume an ongoing relationship with the employer. Non-standard employment describes other kinds of jobs: daily, seasonal, short-term contract, part-time, temporary agency, and others. Sometimes non-standard employment is what a worker wants; other times, it’s all that seems to be available to a worker. But the distinction is potentially important, because the relationship between employer and worker in a standard job is more likely to involve stable income, predictable hours of work, benefits, a safer workplace, on-the-job training, a few steps along a career path, and a voice in the workplace. The International Labour Organization lays out available facts and evidence in its November 2016 report: “Non-Standard Employment Around the World: Understanding challenges, shaping prospects.”

As a starting point, it’s useful to be clear that nonstandard employment is still employment–that is, working for a wage or salary–and thus it’s not subsistence agriculture or what the ILO calls “petty trade.” In much of the world, substantial shares of the workforce don’t work for a wage; indeed, one of the dividing lines between the very poor and the rising middle class in a number of developing countries is whether workers in a household have an ongoing wage relationship. Here’s a figure showing the prevalence of wage-related work around the world:

I won’t run through all the evidence on categories of non-standard employment here, but as one example, consider the prevalence of temporary work. In the US, less than 5% of all wage employees are temp workers. In lots of European countries, it’s 10-20%. In other countries around the world, it’s not that unusual for more than half of wage employees to be temp workers. The upper panel is data on temp workers in Europe, while the bottom panel is the rest of the world.

Survey data in Europe suggests that about 60% of those in temp jobs would prefer a permanent job, but can’t find one. Similarly, in countries across Europe, about 10-15% of workers are on fixed-term contracts. In Europe, the US, and Canada, about 25% of workers are part-time, and survey results suggest that up to about one-third of this group would prefer full-time employment.

The evidence on trends in non-standard employment over time doesn’t seem especially clear to me, since many of these categories are overlapping in some ways and not well-defined. In some low-income and middle-income countries, people may be moving from non-wage work to non-standard employment. But as a broad statement, it does seem that non-standard employment is on the rise, because lots of firms like the flexibility that it offers. Indeed, many firms now think of themselves as having a “core” set of mostly full-time employees, surrounded by a periphery of other employment relationships that include part-timers, temps, and work that is contracted out or outsourced in some way. The result can be a two-track labor force, made up of insiders and outsiders.

Non-standard employment raises difficult questions of public policy. For some, the flexibility of part-time, seasonal, or contract work is an attractive feature. For others, non-standard work is a stepping-stone, a mechanism to get a foot in the door and show what you can do. But for still others, non-standard work is a trap, in which you are treated as a labor commodity that can easily be replaced, and from which there is no clear way out. The ILO report has a lengthy discussion of the studies on work hours, pay conditions, health and safety, and so on. Here’s a flavor of what it says about “stepping stones” and “traps” (footnotes and references to figures are omitted):

Evidence on the prevalence of “stepping stones” versus “traps” phenomena can be examined both through the length and the probability of transitions between various employment statuses for different types of NSE [non-standard employment]. The evidence on these two aspects of transitions shows that, in a vast majority of examined countries, yearly transitions from non-standard to standard employment remain below 55 per cent, and even below 10 per cent in some instances (see review of literature given in the Appendix to this chapter, table A5.1). The “stepping-stone” hypothesis is confirmed in some instances (Denmark, Italy, the Netherlands, United States), in that being employed in a temporary job, rather than being unemployed, significantly increases the probability of obtaining a regular job. The effect varies, however, for specific population groups. It seems to be strongest for young graduates, immigrants and workers initially disadvantaged either in terms of education or of pay. These are indeed the workers for whom the benefits of having lower initial screening, obtaining general rather than specific work experience, and expanding their network through nonstandard jobs are high. In some instances, for example in Uganda, men also have a higher likelihood of moving out of part-time and temporary work to full-time permanent employment, compared to women, who seem to be penalized in terms of labour market transitions. However, when temporary work is further liberalized and the pool of temporary workers increases, then longer-term evidence, such as for Spain or Japan, suggests that over a lifetime of working, those workers who started off with a temporary job have a greater chance of switching between non-standard work and unemployment, compared to workers who start with a permanent contract. In these cases, temporary work ceases to be a stepping stone. Most recent evidence from European countries also shows that there is a negative correlation between the share of temporary workers among employees and the share of temporary employees who moved to permanent employment.

A natural political reaction here is to start passing laws about how employees can be treated, and a number of countries have tried: for example, laws that companies are not allowed to hire workers on fixed-term contracts to do “permanent” work, or rules about minimum hours, minimum wages, predictability of hours, limits on hiring temp workers or on renewing temp arrangements repeatedly, limits on on-call contracts, bringing nonstandard workers into collective bargaining arrangements, and more. The ILO offers an extended discussion of the prevalence of such policies.

My own sense is that while such regulations may often be beneficial in preventing abusive practices, a focus on telling firms what they need to do is not a sufficient answer. Indeed, many countries have found themselves in a situation where they pass rules to limit certain practices in non-standard employment, but then immediately start carving out exceptions that keep allowing flexibility in jobs for the young, the long-term unemployed, the old, those trying to exit from public assistance, and so on. Also, when government passes lots of rules governing what firms must do in an employment relationship, firms are going to think twice before hiring at all–and are going to take a second look at how they can outsource or automate various tasks. Thus, it seems important to me to have some deeper research on the incentives that firms have to use non-standard employment, and whether those incentives might be tweaked, as well as to think about institutions outside the job relationship that might help non-standard employees by organizing mechanisms to finance additional job training, health insurance, retirement savings, and protection against fluctuations in earnings.

Active Labor Market Policies: Time for Aggressive Experimentation

Both passive and active labor market policies are aimed at the unemployed and underemployed. Passive labor market policies refer to providing financial assistance, like unemployment insurance, to those without jobs. Active labor market policies refer to some combination of job search assistance, training, and government-subsidized private- or public-sector employment. In both areas, the US lags behind most other high-income countries.

Here’s an abridged form of a table from the OECD Employment Outlook 2016. Of course, the amount that countries spend on either active or passive labor market policies will be related to the number of unemployed, so for illustrative purposes it’s perhaps useful to focus on 2013, when the US unemployment rate was still in the range of 7-8% and thus more comparable to typical unemployment rates in many European countries.

As the graph shows, the OECD average for spending on labor market policies was 1.46% of GDP in 2013; for comparison, the US total was about one-fourth that amount at .36% of GDP. To break this down, the OECD average was .54% of GDP on active labor market policies and .92% of GDP on passive labor market policies in 2013. The comparable figures for the US were .12% of GDP on active labor market policies and .24% of GDP on passive labor market policies.
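These shares can be sanity-checked with a line or two of arithmetic. A minimal sketch (the variable names are mine; the figures are the OECD and US numbers quoted above):

```python
# Labor market policy spending as a share of GDP, 2013
# (figures as quoted from the OECD Employment Outlook 2016)
oecd_active, oecd_passive = 0.54, 0.92  # OECD averages, % of GDP
us_active, us_passive = 0.12, 0.24      # US figures, % of GDP

oecd_total = oecd_active + oecd_passive  # 1.46% of GDP
us_total = us_active + us_passive        # 0.36% of GDP

# The US total comes to roughly one-fourth of the OECD average
print(round(us_total / oecd_total, 2))  # → 0.25
```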

Do active labor market policies pay off? David Card, Jochen Kluve and Andrea Weber discuss their literature review of 207 studies evaluating such policies in “Active labour market policies and long-term unemployment,” which appears as Chapter 2 in the November 2016 VoxEU.org ebook, Long-Term Unemployment After the Great Recession: Causes and Remedies, edited by Samuel Bentolila and Marcel Jansen.

As a suggestive set of facts about how such policies might work, consider this figure from Card, Kluve, and Weber. The horizontal axis shows the share of GDP spent on active labor market policies; the vertical axis shows long-term unemployment (defined here as being unemployed and looking for work for a year or more) as a share of total unemployment. The correlation is far from perfect, and there are a number of countries with less-than-average spending on active labor market policies and less-than-average long-term unemployment as a share of total unemployment.

In more specific terms, their summary of the evidence is that such studies have often found positive and statistically significant effects. They break down the numbers in a variety of ways, but this figure shows the share of studies in their literature review that found positive effects on three different groups–the short-term unemployed (UI), the long-term unemployed (LTU), or the “disadvantaged” who have a low level of educational background or live in an area with few job prospects. The graph shows that about half the studies find positive effects of active labor market policies for these groups in the short run–with the effects being lower for the short-term unemployed–but 60% or more of the studies find positive effects of such policies for all of these groups in the long term.

The full version of the Card, Kluve and Weber literature review is freely available online as “What Works? A Meta Analysis of Recent Active Labor Market Program Evaluations,” published by the German Institute for the Study of Labor (IZA Discussion Paper No. 9236, July 2015).

A skeptic might reasonably point out that this glass is also half-empty: that is, a lot of the evaluation studies also didn’t find statistically significant effects, and there could be “publication bias” in which studies that don’t find an effect are more likely to end up unpublished. My own take would be that there is sufficient evidence for looking at the kinds of policies that have worked best in other high-income countries, and trying some large-scale experiments with such policies in the US. In Chapter 3 of the VoxEU.org ebook, Lawrence F. Katz, Kory Kroft, Fabian Lange, and Matthew J. Notowidigdo present this view in their essay, “Addressing long-term unemployment in the aftermath of the Great Recession.” They write:

“Our reading of the evidence [from literature reviews] … is that programme effectiveness varies a lot across ALMPs [active labour market policies] and that relatively little is known about what generates this heterogeneity. In particular, little is known whether the heterogeneity is linked to the design of the ALMP or to the treated populations. This reading of the evidence is reinforced by the experimental evidence of the UK Employment Retention and Advancement demonstration (ERA). The ERA experiment covered three different populations, including a group of long-term unemployed. Over the medium term, the treatment was only effective for the long-term unemployed. For the long-term unemployed, the treatment consisted primarily of financial rewards for searching for jobs and for maintaining employment once a job was found. This treatment was found to generate fairly large effects on employment and earnings over the 5-year study period and was found to be cost-effective. The evidence of the ERA thus suggests that financial incentives can help reintegrate the long-term unemployed into the labour market. The ERA however also provides evidence that programme effectiveness varies substantially with the treated population.”

These authors also worry about the possibility of “displacement” effects, in which active labor market policies just reshuffle a fixed number of jobs, making some more likely to get those jobs than others, but without expanding the overall number of job opportunities. They suggest that displacement may be a problem when the economy is especially weak. They write:

“One speculative possibility, based on the (weak) existing evidence, is to focus on providing unemployment assistance and long-term training to the long-term unemployed in the depths of a downturn, but then move towards more aggressive use of ALMPs, such as job-search assistance and hiring subsidies, to try and re-employ the long-term unemployed, as the labour market tightens in a recovery.”

It seems to me that the possibility of a dramatic expansion of active labor market policies through aggressive experimentation deserves more attention in the current US economy. Moreover, when US workers express how traumatized they feel by the tectonic changes that are shaking up US labor markets, it’s worth contemplating the fact that the US does far less than typical high-income countries either to cushion the shocks or to help in the transition to new jobs.

For those interested in this topic, I’ve posted before about “The Case for Active Labor Market Policies” (December 5, 2011).

Alexander Hamilton on Infant Industries

The idea that government should offer temporary support for starting up a new industry, until it becomes established and can stand on its own, has become known as the “infant industry” argument. The “infant” terminology, as well as a prominent example of the underlying argument, dates back to Alexander Hamilton’s 1791 “Report on Manufactures” (available at various places around the web, like here and here).

However, Hamilton’s discussion of how to support what he called “infant manufactures” (a term he uses only one time) isn’t quite as simple as just using tariffs to protect a domestic industry from foreign competition. He is focused on how to encourage new manufacturing industries, and he argues that “bounties”–that is, subsidies–funded from other government revenues are the best way to do so. Hamilton argues for import tariffs only as one way of collecting revenue for the government to provide such “bounties.” A few points worth noting about this line of argument:

1) Hamilton prefers bounties over tariffs, in part, because subsidies avoid raising the price of the good–as tariffs would tend to do. He recognizes that import tariffs cause an “expense to the community,” in the form of higher prices, and in this sense they are just like a tax that is used to support a government subsidy. By contrast, Hamilton argues that explicit subsidies will tend to attract additional competition.

2) Hamilton also favors bounties because he notes that limits on imports only give a domestic producer an advantage in the home market, without providing any incentive for exporting. This suggests that the goal of supporting an infant industry, and a way such a policy can be judged, is whether it leads to export sales.

3) Hamilton emphasizes that any such \”infant manufactures\” policy should be temporary.

4) Hamilton also points out that the US manufacturers of his time already have a considerable advantage over foreign competition because they have much lower costs of transportation.

5) Hamilton’s “infant manufactures” argument is not an argument for tariffs to protect jobs in existing established industries. It applies to subsidizing new industries.

6) From a political economy point of view, it’s interesting to contemplate how a policy of using revenue from tariffs to fund subsidies for domestic industry would play in the contemporary world. It would make explicit, in the form of government collecting revenue and cutting checks, that tariffs impose costs to benefit a domestic industry. I don’t know if this would make such a policy more popular or less so.

7) It’s interesting to reconsider Hamilton’s argument for bounties in the context of the modern economy. Modern governments have a lot of tools that give indirect support to new industry: for example, support of education at the K-12 and university levels, support of R&D, well-developed institutions of intellectual property, legal institutions that support enforceable contracts and financial markets, and so on. Modern governments also have sources and quantities of revenue undreamed of in Hamilton’s time: thus, a standard modern argument against tariffs is that if you want to subsidize a certain industry (and I’m leaping over a Grand Canyon of arguments with that “if”), then setting up a situation in which domestic producers can charge consumers more (because of limited competition) is an odd and potentially counterproductive way to do it.

Here’s the relevant passage from Hamilton:

Pecuniary bounties. This has been found one of the most efficacious means of encouraging manufactures, and is, in some views, the best. Though it has not yet been practised upon by the Government of the United States (unless the allowance on the exportation of dried and pickled fish and salted meat could be considered as a bounty), and though it is less favored by public opinion than some other modes, its advantages are these:

 1. It is a species of encouragement more positive and direct than any other, and, for that very reason, has a more immediate tendency to stimulate and uphold new enterprises, increasing the chances of profit, and diminishing the risks of loss, in the first attempts.

2. It avoids the inconvenience of a temporary augmentation of price, which is incident to some other modes; or it produces it to a less degree, either by making no addition to the charges on the rival foreign article, as in the case of protecting duties, or by making a smaller addition. The first happens when the fund for the bounty is derived from a different object (which may or may not increase the price of some other article, according to the nature of that object), the second, when the fund is derived from the same, or a similar object, of foreign manufacture. One per cent. duty on the foreign article, converted into a bounty on the domestic, will have an equal effect with a duty of two per cent., exclusive of such bounty; and the price of the foreign commodity is liable to be raised, in the one case, in the proportion of one per cent.; in the other, in that of two per cent. Indeed the bounty, when drawn from another source, is calculated to promote a reduction of price; because, without laying any new charge on the foreign article, it serves to introduce a competition with it, and to increase the total quantity of the article in the market.

3. Bounties have not, like high protecting duties, a tendency to produce scarcity. An increase of price is not always the immediate, though, where the progress of a domestic manufacture does not counteract a rise, it is, commonly, the ultimate effect of an additional duty. In the interval between the laying of the duty and the proportional increase of price, it may discourage importation, by interfering with the profits to be expected from the sale of the article.

4. Bounties are, sometimes, not only the best, but the only proper expedient for uniting the encouragement of a new object of agriculture with that of a new object of manufacture. It is the interest of the farmer to have the production of the raw material promoted by counteracting the interference of the foreign material of the same kind. It is the interest of the manufacturer to have the material abundant and cheap. If, prior to the domestic production of the material, in sufficient quantity to supply the manufacturer on good terms, a duty be paid upon the importation of it from abroad, with a view to promote the raising of it at home, the interest both of the farmer and manufacturer will be disserved. By either destroying the requisite supply, or raising the price of the article beyond what can be afforded to be given for it by the conductor of an infant manufacture, it is abandoned or fails, and there being no domestic manufactories to create a demand for the raw material, which is raised by the farmer, it is in vain that the competition of the like foreign article may have been destroyed.

It cannot escape notice, that a duty upon the importation of an article can no otherwise aid the domestic production of it, than by giving the latter greater advantages in the home market. It can have no influence upon the advantageous sale of the article produced in foreign markets — no tendency, therefore, to promote its exportation.

The true way to conciliate these two interests is to lay a duty on foreign manufactures of the material, the growth of which is desired to be encouraged, and to apply the produce of that duty, by way of bounty, either upon the production of the material itself, or upon its manufacture at home, or upon both. In this disposition of the thing, the manufacturer commences his enterprise under every advantage which is attainable, as to quantity or price of the raw material; and the farmer, if the bounty be immediately to him, is enabled by it to enter into a successful competition with the foreign material. If the bounty be to the manufacturer, on so much of the domestic material as he consumes, the operation is nearly the same; he has a motive of interest to prefer the domestic commodity, if of equal quality, even at a higher price than the foreign, so long as the difference of price is any thing short of the bounty which is allowed upon the article.

Except the simple and ordinary kinds of household manufacture, or those for which there are very commanding local advantages, pecuniary bounties are, in most cases, indispensable to the introduction of a new branch. A stimulus and a support, not less powerful and direct, is, generally speaking, essential to the overcoming of the obstacles which arise from the competitions of superior skill and maturity elsewhere. Bounties are especially essential in regard to articles upon which those foreigners, who have been accustomed to supply a country, are in the practice of granting them.

The continuance of bounties on manufactures long established, must almost always be of questionable policy: because a presumption would arise, in every such case, that there were natural and inherent impediments to success. But, in new undertakings, they are as justifiable as they are oftentimes necessary.

There is a degree of prejudice against bounties, from an appearance of giving away the public money without an immediate consideration, and from a supposition that they serve to enrich particular classes, at the expense of the community. But neither of these sources of dislike will bear a serious examination. There is no purpose to which public money can be more beneficially applied, than to the acquisition of a new and useful branch of industry; no consideration more valuable, than a permanent addition to the general stock of productive labor.

As to the second source of objection, it equally lies against other modes of encouragement, which are admitted to be eligible. As often as a duty upon a foreign article makes an addition to its price, it causes an extra expense to the community, for the benefit of the domestic manufacturer. A bounty does no more. But it is the interest of the society, in each case, to submit to the temporary expense, which is more than compensated by an increase of industry and wealth; by an augmentation of resources and independence; and by the circumstance of eventual cheapness, which has been noticed in another place. It would deserve attention, however, in the employment of this species of encouragement in the United States, as a reason for moderating the degree of it in the instances in which it might be deemed eligible, that the great distance of this country from Europe imposes very heavy charges on all the fabrics which are brought from thence, amounting to from fifteen to thirty per cent. of their value, according to their bulk.
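Hamilton’s arithmetic in point 2 of the passage above is easy to verify: a one per cent duty, recycled into a one per cent bounty on the domestic article, gives the domestic producer the same edge as a standalone two per cent duty, while raising the foreign price only half as much. A minimal sketch (the starting prices and variable names are mine, for illustration only):

```python
# Suppose the foreign and domestic goods both cost 100 before any policy.
foreign_price = 100.0
domestic_cost = 100.0

# Option A: a standalone 2% duty on the foreign article
foreign_a = foreign_price * 1.02           # foreign price rises to 102
edge_a = foreign_a - domestic_cost         # domestic producer's edge: 2

# Option B: a 1% duty, with the revenue paid out as a 1% bounty
foreign_b = foreign_price * 1.01           # foreign price rises only to 101
domestic_b = domestic_cost - 0.01 * domestic_cost  # bounty offsets 1% of cost
edge_b = foreign_b - domestic_b            # domestic producer's edge: 2

# Same edge for the domestic producer, but consumers of the foreign
# article face a 1% price rise instead of 2%.
print(edge_a, edge_b)
```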

Economics and Satellite Data

Empirical work in economics has traditionally been based on numerical data: for example, data on prices and quantities or demographics, or data from government statisticians on GDP, employment, inflation, and the level of trade, or survey data on attitudes. But economics is starting to use data from satellites in a systematic way. Dave Donaldson and Adam Storeygard describe what's happening in "The View from Above: Applications of Satellite Data in Economics," in the Fall 2016 issue of the Journal of Economic Perspectives. (Full disclosure: I've worked as Managing Editor of the JEP since the first issue in 1987.)

The possibility of using imagery from the sky to illustrate economic differences has been apparent for some years. Two of the most prominent examples involve national borders: South and North Korea at night, and Haiti and the Dominican Republic in the daylight.

Here's a NASA photograph from 2014 of North and South Korea. For those of you whose Asian geography is a little shaky, these two countries sit on a peninsula that is attached to China. In the bottom right of the picture is South Korea. The giant city of Seoul is the brightest light, while smaller cities like Gunsan show up as smaller areas of light. In the upper right portion of the photo is China. The dark area between the two–the land mass between the Yellow Sea and the Sea of Japan–is North Korea, whose capital city of Pyongyang appears as a very small illuminated dot. This picture clearly tells us something about electricity availability and consumption, and also about the broader standard of living, in North Korea.

Here's some NASA imagery of the border between Haiti and the Dominican Republic, which share the island of Hispaniola. The Haiti side is deforested, almost denuded; the Dominican Republic side is not. The causes of deforestation are typically a combination of very poor people looking for any firewood to burn and overgrazing of livestock. Again, this image tells you something about the standard of living and governance in two neighboring countries.

But as social scientists sometimes say, the plural of "anecdote" is not "data." To have data, you need more than images. As Donaldson and Storeygard point out, a number of technological advances are coming together here. Satellite data is getting more detailed. They report:

"Much of the publicly available satellite imagery used by economists provides readings for each of the hundreds of billions of 30-meter-by-30-meter grid cells of land surface on Earth. Many economic decisions (particularly land use decisions such as zoning, building types, or crop choice) are made at approximately this same level of spatial resolution. But since 1999, private companies have offered submeter imagery and, following a 2014 US government ruling, American companies are able to sell imagery at resolutions below 0.5 meters to nongovernment customers for the first time. This is important because even when a coarser unit of analysis is appropriate, 900 1-meter pixels provide far more information available for signal extraction than a single 30-meter pixel covering the same area."
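The arithmetic in that quote follows from the fact that the number of fine pixels inside one coarse cell grows with the square of the linear resolution ratio. A quick sketch (the function name is mine, for illustration):

```python
# How many fine-resolution pixels fit inside one coarse grid cell?
# The count scales with the square of the linear resolution ratio.
def pixels_per_cell(coarse_m: float, fine_m: float) -> int:
    """Number of fine_m-by-fine_m pixels inside one coarse_m-by-coarse_m cell."""
    ratio = coarse_m / fine_m
    return round(ratio ** 2)

print(pixels_per_cell(30, 1))    # 900, as in the quoted passage
print(pixels_per_cell(30, 0.5))  # 3600 at sub-half-meter resolution
```

So moving from 30-meter to 0.5-meter imagery multiplies the raw pixel count for the same patch of ground by 3,600.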

Satellite imagery is of course not primarily about pretty photographs: satellites can gather a range of images across the spectrum. Machine learning and artificial intelligence can be used to categorize this data. Advances in computing power and clever algorithms for dealing with "gridded" data are making it possible to process all this information and to analyze it. The results provide a window on economic activity that lies outside the traditional economic data.

For example, in a lot of places the story from "night lights" may tell you as much about GDP or economic growth as any official government statistics. They write:

"In an annual panel of countries from 1992 to 2008, as well as the corresponding long difference, these authors estimate a lights-GDP elasticity of 0.28 to 0.32, with no evidence of nonlinearity or asymmetry between increases and decreases in lights. In the long difference, the lights-GDP relationship has a correlation coefficient of 0.53. Under a range of assumptions about measurement error of GDP in countries with good data, they estimate a structural elasticity of lights growth with respect to GDP growth of between 1.0 and 1.7."
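An elasticity like the one quoted is the slope of a log-log regression: a 1 percent change in GDP is associated with a roughly 0.3 percent change in lights. A minimal sketch of that estimator on synthetic data (all numbers here are invented for illustration, not drawn from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic country-year panel: true elasticity of lights with
# respect to GDP set to 0.3, plus measurement noise.
log_gdp = rng.normal(10, 1, size=500)
log_lights = 0.3 * log_gdp + rng.normal(0, 0.2, size=500)

# In a log-log regression, the OLS slope is the elasticity.
slope, intercept = np.polyfit(log_gdp, log_lights, 1)
print(round(slope, 2))  # recovers a value close to the true 0.3
```

The published estimates of course come from a full panel specification with fixed effects; this only shows where the "elasticity" number lives mechanically.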

In some cases, satellite images may offer a less-biased source of information. If you want to know about deforestation in Indonesia or Brazil, for example, satellite images may tell you more than government data reported by local politicians. If you want to know about the level of particulate air pollution, satellite images may be more trustworthy than government reporting.

If you want to know about how many buildings are in the mega-slums of certain cities in the developing world, looking at satellite images may beat government data. If you want to know about recent building, you can compare satellite images over time; one study looked at the reflectivity of roofs, because more recent buildings tend to have roofs that reflect more light. Here's an image from their paper of roofs in a neighborhood of Nairobi:

If you want to know about how drought affects armed conflict, satellite images may be the best data you can get. If you want to know about how local topography and weather affects the spreading of cities, or the location of dams, or the possibilities for altering crop production patterns, satellite data can be enormously useful.

If you want to know about poverty in developing countries, a recent paper in Science shows how to measure poverty with images of things like roads, water, and buildings that can be converted into estimates and maps of household consumption and assets for low-income areas.

Of course, traditional economic data is not going away, and many studies use the new satellite data together with traditional data. But the satellite data is getting better and more available at a rapid clip. Among the more recent examples are studies that use the data to "count crowds of people and cars at events such as protests, political rallies, or peak shopping periods … Similarly, traffic volumes could be measured from snapshots of car densities." It's a new research frontier.

The Differences Between Selective and Nonselective in Higher Education

Caroline M. Hoxby delivered the 2016 Martin Feldstein Lecture at the National Bureau of Economic Research in July. An edited and annotated version of her talk, "The Dramatic Economics of the U.S. Market for Higher Education," is now available in the NBER Reporter (2016, Number 3, pp. 1-6). You can also watch the full hour-long lecture online and view the slides at the NBER website.

Hoxby focuses much of her discussion on a comparison of more-selective and less-selective institutions. She presents evidence that the more-selective institutions spend more per student, but also have higher value-added per student. Interesting and perhaps disconcerting conclusions follow. She defines selectivity in this way:

"Selectivity is holistic but, roughly speaking, the "most" selective institutions' average student has a combined (math plus verbal) SAT (or translated ACT) score above 1300, the 90th percentile among test-takers. (Since some students do not take the tests, this corresponds to the 96th percentile among all students.) "Highly" or "very" selective institutions have an average student with combined scores above the 75th percentile (about 1170). "Selective" (without a modifier) institutions ask students to submit scores, grades, and other materials and turn down those judged to be inadequately prepared. Schools with combined scores above 1000 (the 47th percentile) are at least modestly selective. Non-selective schools usually only require that a student have a high school diploma or the equivalent and often have average combined scores of 800 (the 15th percentile) or below. The divide between non-selective and modestly selective schools is rough but somewhere between 800 and 1000."
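Hoxby's categories can be read as rough thresholds on a school's average combined SAT score. A sketch of that reading (the cutoffs follow the quoted passage; the tier labels and exact boundary handling are my own simplification, since she stresses the divides are approximate):

```python
def selectivity_tier(avg_sat: int) -> str:
    """Rough selectivity tiers from Hoxby's description; boundaries approximate."""
    if avg_sat > 1300:
        return "most selective"        # above the 90th percentile of test-takers
    if avg_sat > 1170:
        return "highly/very selective"  # above the 75th percentile
    if avg_sat > 1000:
        return "at least modestly selective"  # above the 47th percentile
    if avg_sat > 800:
        return "borderline (non-selective to modestly selective)"
    return "non-selective"             # roughly the 15th percentile or below

print(selectivity_tier(1350))  # most selective
print(selectivity_tier(900))   # borderline (non-selective to modestly selective)
```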

In terms of the market for higher education, selective institutions compete with each other, and students applying to these institutions tend to apply outside their home area. Nonselective institutions face less competition, because most of their students are drawn from close to their geographic location.

Hoxby shows that selective colleges and universities spend a lot more per student than nonselective schools. Each dot on the graph represents a college or university. The horizontal axis shows the median SAT score for the school, so less selective schools are on the left and more selective schools are on the right. The vertical axis graphs "instructional resources" with light blue dots and "core student resources"–which also includes other student services and academic support–with dark blue dots. Those who attend a school where the median SAT is above about 1200 get a lot more resources than those who attend less selective schools.

Figure 1

Indeed, the total education resources spent on students who attend selective colleges over their lifetime shows an even bigger gap, because such students are more likely to complete a four-year degree and to go on to graduate school education.

Not surprisingly, those who attend more selective colleges earn higher wages on average. What is perhaps more interesting is that even after using various methods to adjust for the differing quality of students, the higher-selectivity schools also have higher value-added. For example, Hoxby has the data to compare students with similar test scores who apply to similar places but end up at institutions with slightly different selectivity, or to compare students with similar test scores of whom only some were admitted to a school of greater selectivity. In this figure, the dark-blue dots and the right-hand axis show the total wage and salary earnings over a lifetime for graduates of schools with different levels of selectivity. The light-blue dots show the value-added of institutions of different selectivity–that is, how much they add to wages after adjusting for the fact that their students had higher test scores in the first place.

Figure 4

There's an important underlying assumption here. For simplicity, Hoxby assumes that the value-added of attending a less-selective college is zero, and then measures the value-added of more-selective colleges relative to less-selective ones.

Thus, the more selective institutions spend more per student, but also have higher value-added per student. If you take the gains per student and divide by the cost per student, you have a measure of productivity. Hoxby does the calculation, and finds these patterns: Productivity is lower for low-selectivity schools, which means that although students at these schools have much less spent on their higher education, they have even-lower value-added from that education (that is, even lower wage gains after taking their test scores into account). However, productivity is basically flat for schools ranging from moderately selective to highly selective. In other words, as schools become more selective they spend more on students but also have greater value-added for students (again, after adjusting for the test scores of those students), and these two factors tend to balance out, so that the productivity of a moderately selective school is pretty much the same as at a highly selective school.
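Productivity here is simply value-added divided by cost. A toy calculation, with invented numbers (not Hoxby's data), shows how higher spending and higher value-added can leave the ratio flat:

```python
def productivity(value_added: float, cost: float) -> float:
    """Value-added per dollar of educational spending."""
    return value_added / cost

# Hypothetical figures: a highly selective school spends twice as much per
# student, but also adds twice as much to lifetime earnings, so the
# productivity ratio comes out the same as at a moderately selective school.
moderate = productivity(value_added=300_000, cost=60_000)
high = productivity(value_added=600_000, cost=120_000)
print(moderate, high)  # 5.0 5.0 -- equal productivity despite different spending
```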

Figure 6

Hoxby points out some interesting and perhaps disconcerting implications that flow from this analysis.

A common reaction that many people have to the first graph shown above is that it seems potentially inefficient that the selective schools spend more per student. Might the funds spent on higher education have a greater social return if, say, funds from Harvard or Stanford were instead spent at a less-selective school? Hoxby's analysis suggests that students with greater readiness for college who attend selective institutions also have higher value-added from a dollar of spending on higher education. From a broad social point of view, it makes economic sense to spend more on those at more-selective institutions.

Hoxby's analysis also suggests that if our social goal is to expand the number of students in college, it makes more sense to focus on improving K-12 education so that a greater number of students are college-ready, rather than just offering more loans or aid for college students. If improvements in K-12 education led to a substantially higher share of students being college-ready, then it would actually make sense from a social point of view to dramatically expand college spending per student–because students who are more college-ready would be positioned to take greater advantage of what college has to offer. In contrast, providing additional financing to send a greater share of students with low test scores to less-selective colleges may not have much social payoff.

However, Hoxby also provides evidence that there is considerable variability in the productivity of less-selective schools. It turns out that the 90th percentile of less-selective schools actually has much higher productivity than the selective schools, while the 50th percentile and below of less-selective schools have much lower productivity than the selective schools. Identifying the top 10-20% of less-selective institutions can be useful for students, and also for creating competitive pressures that help these high-productivity schools to expand and push the low-productivity schools to improve.

Fodder to Chew Over for US Election Day

As we all struggle with our emotions about the campaign and the potential outcome of this 2016 US election day, here is some food for thought from posts during the past couple of years about US voting and elections.

1) "Ray Fair: The Economy is Tilting Republican" (May 19, 2016)

Fair is a prominent macroeconomist who, every four years, looks at data on the economy and political outcomes. His analysis is that the relatively slow growth of the US economy should tend to favor Republicans this year. Fair's most recent predictions are here. Of course, looking only at how past economic performance affects election outcomes means that all other factors about the personalities, performance, and policies of the candidates are not included–and those other factors probably matter a lot in the 2016 presidential election.

2) "Sketching State Laws on Administration of Elections" (September 26, 2016)

A US national election is actually 50 state elections, where the rules can vary along a number of dimensions like requirements for voter ID, what is allowed in terms of absentee or early voting, what determines recounts, and more.

3) "Investigating Why People Vote" (March 8, 2014)

One of the classic questions in political economy is why anyone should bother to vote, given that the chance of your vote determining the outcome is so small. This clever experiment combined interviews with people about why they did or did not vote with actual data on whether they had voted. Thus, the authors can draw conclusions like "Voters do not feel pride from saying they voted, but non-voters do feel shame" and "Non-voters lie and claim they voted half the time, while voters tell the truth."

4) "What is Discouraging the Registered Voters Who Don't Vote?" (July 28, 2015)

One of the surveys done by the US Census Bureau investigates this question. The answers are somewhat predictable, like "too busy," "out of town," "ill," "did not like candidates or campaign issues" and others.

5) "Should Voting Be Compulsory?" (November 6, 2012)

I don't think so, but many countries disagree. Here's some evidence on compulsory voting, and links to arguments by modern political philosophers on both sides.

6) "Ticket Splitting in US Elections" (June 21, 2016)

The practice of voting for a presidential candidate of one party while voting for a House of Representatives candidate of the other party seems to have been lower in the last few elections than it used to be.

7) "The Rise in Political Polarization: Both Real and Exaggerated?" (May 10, 2016)

Political polarization seems to be rising in the US, but people's perceptions of how much political polarization has risen seem to be exaggerated.


8) "Political Polarization and Confirmation Bias" (October 27, 2014)

Confirmation bias is the well-researched psychological insight that when people get new information, they tend to interpret that information in a way that confirms and strengthens their preexisting beliefs. Indeed, when two groups of people have opposite beliefs on an issue like, say, capital punishment, getting the same information causes both groups to be strengthened in the belief that they are correct! In this short essay, I discuss my concern that strong political partisanship is often interrelated with confirmation bias.

US Sentiments on Protectionism: A Breakdown

In the heat of the political season, it sometimes sounds as if American opinion has made a sharp swerve in the direction of opposing free trade. However, Cullen S. Hendrix argues that the data tells a more nuanced story in "Protectionism in the 2016 Election: Causes and Consequences, Truths and Fictions," published by the Peterson Institute for International Economics (Policy Brief 16-20, November 2016). Hendrix writes:

"Two prominent explanations—that free trade is popular with elites but unpopular with the masses and that younger generations are more protectionist than older ones—can be rejected. Another three—the "China shock" sinking in, the current US electoral map privileging protectionist sentiment, and modern free trade agreements' growing complexity—are more compelling. At root, the turn toward protectionism is the result of a disconnect between the United States' rising trade exposure and a failure to adopt the social expenditure policies that have accompanied open markets in other advanced economies."

In response to the belief that free trade has become uniquely unpopular in the US, Hendrix offers some actual survey evidence. For example, here are answers to a Gallup poll question on whether trade is an "opportunity" or a "threat." The two answers were tied around 2012, but "opportunity" has surged ahead since then.

Or here's survey evidence from the Pew Foundation on whether trade agreements have been good or bad for the US economy. "Good" is down a bit in the last couple of years, but still in the lead.

More detailed breakdowns of the survey data show that young people are more likely to favor free trade than older people. Hendrix writes: "In the main, younger voters are not clamoring for a return to more protectionist policies. Indeed, most have never known a world without the World Trade Organization (WTO) and NAFTA; for younger voters, free trade and FTAs [free trade agreements] are the default state of affairs. Anti-FTA rhetoric hits home with people nearing retirement age, but not as much with older or younger Americans."

So why were politicians of all parties pushing for protectionism in 2016? There does seem to be a pattern that support for protectionism rises in election years. But in particular, protectionism tends to be more popular in the "battleground" states that decide national elections. Hendrix writes: "In the 11 battleground states in the 2016 election (Colorado, Florida, Iowa, Michigan, Nevada, New Hampshire, North Carolina, Ohio, Pennsylvania, Virginia, and Wisconsin), the pattern is striking. In only one (Colorado, with 9 electoral votes) did the majority of respondents report that free trade had definitely or probably helped them or their families. In the remaining 10 (which together hold 137 electoral votes), the proportions agreeing with the statement ranged from 25 percent in Michigan (where Bernie Sanders won the Democratic primary) to 27 percent in Ohio and 48 percent in Virginia."

In other words, if California and New York and Texas were the swing states, rather than being locked in for one party or another, politicians would be a lot less likely to talk up their skepticism about trade.

The other issue that Hendrix points out is that free-trade agreements have become vastly more complex, as they have less to do with just reducing tariffs and more to do with setting common standards for competition and regulation across industries. He notes: "The text of the Israel-US Free Trade Agreement, which entered into effect in 1985, totals roughly 7,500 words. The text of the TPP [Trans-Pacific Partnership], inclusive of annexes and schedules, runs to roughly 2 million words—an astonishing 266 times longer. The TPP is not an isolated case: FTAs have grown longer and more complex over time, as they move beyond tariff barriers to touch on a host of regulatory issues related to market access, harmonization of standards, environmental issues, intellectual property, and health and safety."

This added complexity means that those who are just fundamentally opposed to trade can point to specific controversial provisions, and politicians can twist and dodge on the issue by saying that while they favor free trade in general, certain provisions of this particular agreement aren't acceptable. Of course, in agreements between multiple nations that run to millions of words, there will always be controversial and just plain wrong-headed provisions.

Finally, Hendrix points out that many European countries have embraced openness to international trade–for example, through the European Union–but have accompanied that embrace with generous programs for supporting displaced workers. Indeed, perhaps the defining characteristic of what is sometimes called the "Nordic model" of the Scandinavian economies is an embrace of free trade and market capitalism combined with high taxation and an active welfare state.

While I largely agree with Hendrix's argument, I'd add this point. Compared to a lot of countries around the world, US support for open international trade tends to be relatively low. I suspect that this situation arises because most other countries have long had much more exposure to international trade than the US economy, which has an enormous internal market. Moreover, those who feel threatened by international trade tend to have much larger political megaphones than those who support it: indeed, free trade is a classic case of a situation with widespread but diffuse benefits and more concentrated costs. During political campaigns, the US voices opposed to trade get louder and the defenders tend to duck their heads and wait for the wave to pass.

Early Childhood Education: Promises and Practicalities

Like many people, I find myself enormously attracted to the idea of early childhood education: that is, the idea that a well-chosen intervention aimed at small children could pay off over the longer term in improved academic performance and a reduced incidence of undesirable social outcomes like dropping out of high school, crime, or teen motherhood. But while the actual evidence on such programs offers some reason for encouragement, and certainly provides a basis for additional experimentation, it's not as strong as I would like. The Fall 2016 issue of Future of Children has a 10-paper symposium, plus an overview essay, about the existing research on "Starting Early: Education from Prekindergarten to Third Grade."

In the overview essay, "Starting Early: Introducing the Issue," Jeanne Brooks-Gunn, Lisa Markman-Pithers, and Cecilia Elena Rouse offer a fair-minded overview. They write: "[W]e believe the weight of the evidence, as reflected in the articles in this issue, indicates that high-quality pre-K programs can indeed play an important role in improving later outcomes, particularly for children from more disadvantaged families. At the same time, significant questions remain."

Discussions of early childhood education often start out with iconic programs like the Perry Preschool Program and the Abecedarian program. Both chose a group of children and provided services randomly to about half the group. Both had long-term follow-up into adulthood. Both found substantial gains in both academic and broader life outcomes. And frankly, I'm not sure that either one is especially relevant to the practical challenges and questions that early childhood education faces today.

The Perry Preschool Program in Ypsilanti, Michigan, had a sample size of 123 students, of whom about half received services, from 1962 to 1967. The Carolina Abecedarian program in Chapel Hill, North Carolina, included 111 students, of whom a randomly selected half received services between 1972 and 1982. These are high-quality studies, well-designed and with excellent follow-up. They were also done four or five decades ago. The sample sizes were small. The assistance was intensive: for example, the Perry Program included half-day day care and weekly home visits for 30 weeks per year, while the Abecedarian program was full-time day care for 50 weeks per year. The parents were quite disadvantaged in terms of being low-income and low-education (and the mid-1960s in particular was before Medicaid and even widespread kindergarten). Alternative early enrichment options for children from disadvantaged families were much less available at that time than in more recent years.

The practical difficulty is that extrapolating from these kinds of small-scale pilot programs with a few dozen students to city-wide programs, much less to statewide or national programs, is an enormous leap, and the results have not always been encouraging. For example, I wrote a few years back that "Head Start is Failing Its Test" (January 29, 2013). Starting in 2002, a nationally representative sample of 5,000 students was randomly assigned to Head Start, or not. The study found that the academic effects of being assigned had faded out by third grade, and there were also no meaningful effects on "cognitive, social-emotional, health and parenting practices."

A more recent discouragement comes from a study in Tennessee, which Ron Haskins and Jeanne Brooks-Gunn discuss in "Trouble in the Land of Early Childhood Education?" (Future of Children Policy Brief, Fall 2016). They write: "A recent evaluation has roiled the field of early childhood education with the finding that by the time they reached third grade, children who participated in Tennessee's statewide pre-K program had worse attitudes toward school and poorer work habits than children who didn't." Of course, when the Tennessee study showed positive results of pre-K in kindergarten, it was hailed as a high-quality program and a well-designed study, but when it showed undesirable results by third grade, it was criticized as a low-quality program and a poorly designed study.

Other more recent studies in places like Tulsa, Oklahoma, and Boston, Massachusetts, have found more positive results. In the overview essay in the Future of Children issue, here's how Brooks-Gunn, Markman-Pithers, and Rouse describe the overall evidence:

At the end of most evaluated [early childhood education] programs, researchers find effects on school achievement, though these effects diminish over elementary school. When program effects are large, they tend to be maintained into elementary school, though they are smaller than the initial impacts. At the same time, we see long-term effects on adult outcomes—for example, a reduction in crime or the completion of more schooling. It’s puzzling that during elementary school, the achievement-test scores of children who attended prekindergarten converge with the test scores of children who did not, a phenomenon commonly called fadeout. Studies document that those who participate in a pre-K program have a significant advantage in kindergarten in terms of educational achievement. But those assigned to the control group tend to catch up in the first through third grades; in most evaluations, more than half the difference between the two groups disappears by the end of first grade.

This conclusion is similar to that reached by Greg J. Duncan and Katherine Magnuson, who offer a broader and modestly more hopeful angle in their paper, "Investing in Preschool Programs," in the Spring 2013 issue of the Journal of Economic Perspectives. I offered some discussion of their findings in "Preschool for At-Risk Children, Yes; Universal Preschool, Maybe Not" (May 23, 2013).

This kind of finding is why Brooks-Gunn, Markman-Pithers, and Rouse refer to significant questions that remain to be answered. Why do the academic effects of early childhood education so often fade out? Is it a lack of follow-up in schools? The importance of peer effects, as students who received pre-K assistance are blended in later grades with those who did not? Maybe the pre-K programs themselves vary in some way? Maybe the benefits of such programs are noncognitive–but noncognitive skills can be quite important. Here are some of their thoughts about ongoing issues.

Perhaps the problem is too much focus in pre-K programs on "constrained" skills. The authors distinguish between "constrained skills (such as phonemic awareness and letter knowledge) and unconstrained skills (such as knowledge of the world). Young children are typically taught constrained skills, which are associated with success until second or third grade. Beyond third grade, however, mastery of comprehension is associated much more with unconstrained skills."

Perhaps there should be more explicit emphasis on noncognitive skills like executive function: "Executive function includes such abilities as attention, working memory, and inhibitory control—all of which are associated with cognitive and behavioral outcomes for both children and adults. Raver and Blair offer research to show that the development of executive function before children enter elementary school predicts their early math and reading skills. The authors also review promising individual and classroom interventions to improve executive function. Research on how to integrate the learning of memory, attention, and planning into the classroom is just beginning …"

Perhaps some of the issue involves the qualifications and pay of pre-K teachers: "The proportion of pre-K teachers who have a bachelor's degree and certification for teaching early childhood education has increased over the past 15 years. Still, far fewer pre-K teachers than kindergarten teachers are college graduates, owing to differences in requirements. State and city pre-K programs usually require teachers to hold a bachelor's degree, which has led to disparities between pre-K teachers in programs administered by public school systems and those in Head Start and community programs. Wage differentials are also high. Indeed, many pre-K teachers experience financial hardship and lack health insurance. … Until we pay more attention to the links between training and classroom interactions, we can't evaluate the efficacy of current training and education programs. The same is true for implementing curricula in the classroom."

Perhaps some of the issue is the extent to which parental involvement is encouraged: "One of the goals of Head Start and other pre-K programs is to provide support, information, and even instruction to parents in the context of prekindergarten. In fact, being in favor of involving parents in their children's pre-K programs seems much like supporting motherhood and apple pie. But even though everyone believes such involvement is necessary, we know little about whether it makes the programs more effective. In fact, Katherine Magnuson and Holly Schindler report that when parenting programs attached to pre-K programs have been evaluated, many have proven to be ineffective. But programs that target specific competencies are more likely to have benefits, especially those that help parents deal with their children's behavior problems. Also, a few programs targeting mothers' literacy and reading have been effective."

The promise of early childhood education focused on the most disadvantaged children is real. There are dozens of studies that have found positive effects. But the practical problems of designing and implementing programs that work at larger scale are also very real. As Brooks-Gunn, Markman-Pithers, and Rouse note: “We also need a better understanding of how to take high-quality programs to scale—the most relevant example being the rollout of city- and state-level pre-K programs.”

Adam Smith’s Watch-Making Example: Technological Progress Before the Industrial Revolution

The history of technological progress is sometimes told as “nothing much worth mentioning until the Industrial Revolution.” But writing in the years before 1776, Adam Smith would not have agreed. Although his example of the division of labor in the pin-making industry near the start of The Wealth of Nations gets more attention, and deservedly so, near the end of Book I of TWN there is also some discussion of technological progress. In particular, Smith makes a strong claim that the real price of watches may have fallen by 95% during the previous century. In the November 2016 Quarterly Journal of Economics, Morgan Kelly and Cormac Ó Gráda build up data on the price of watches from 1685 to 1810 by using the reported value of stolen watches in criminal trials at the Old Bailey in London during this time. Here, I’ll first review Smith’s comments, and then offer a brief overview of the Kelly and Ó Gráda essay, “Adam Smith, Watch Prices, and the Industrial Revolution” (131: 4, pp. 1727-1752). The QJE is not freely available online, but many readers will have access through library subscriptions.

The relevant discussion from Adam Smith occurs at the tail end of Book I, in Part III of Chapter 11. Here, I’ll quote from the ever-useful version of The Wealth of Nations available online at the Library of Economics and Liberty website. Smith doesn’t use the modern terminology of “productivity” or “technology,” but phrases his argument in terms of “the natural effect of improvement.” He writes (paragraph 240 and following):

It is the natural effect of improvement, however, to diminish gradually the real price of almost all manufactures. That of the manufacturing workmanship diminishes, perhaps, in all of them without exception. In consequence of better machinery, of greater dexterity, and of a more proper division and distribution of work, all of which are the natural effects of improvement, a much smaller quantity of labour becomes requisite for executing any particular piece of work; and though, in consequence of the flourishing circumstances of the society, the real price of labour should rise very considerably, yet the great diminution of the quantity [of labor] will generally much more than compensate the greatest rise which can happen in the price. …

This diminution of price has, in the course of the present and preceding century, been most remarkable in those manufactures of which the materials are the coarser metals. A better movement of a watch, than about the middle of the last century could have been bought for twenty pounds, may now perhaps be had for twenty shillings. In the work of cutlers and locksmiths, in all the toys which are made of the coarser metals, and in all those goods which are commonly known by the name of Birmingham and Sheffield ware, there has been, during the same period, a very great reduction of price, though not altogether so great as in watch-work. It has, however, been sufficient to astonish the workmen of every other part of Europe, who in many cases acknowledge that they can produce no work of equal goodness for double, or even for triple the price. There are perhaps no manufactures in which the division of labour can be carried further, or in which the machinery employed admits of a greater variety of improvements, than those of which the materials are the coarser metals. …

Both in the coarse and in the fine woollen manufacture, the machinery employed was much more imperfect in those ancient, than it is in the present times. It has since received three very capital improvements, besides, probably, many smaller ones of which it may be difficult to ascertain either the number or the importance. The three capital improvements are: first, The exchange of the rock and spindle for the spinning-wheel, which, with the same quantity of labour, will perform more than double the quantity of work. Secondly, the use of several very ingenious machines which facilitate and abridge in a still greater proportion the winding of the worsted and woollen yarn, or the proper arrangement of the warp and woof before they are put into the loom; an operation which, previous to the invention of those machines, must have been extremely tedious and troublesome. Thirdly, The employment of the fulling mill for thickening the cloth, instead of treading it in water. Neither wind nor water mills of any kind were known in England so early as the beginning of the sixteenth century, nor, so far as I know, in any other part of Europe north of the Alps. They have been introduced into Italy some time before.

The consideration of these circumstances may, perhaps, in some measure explain to us why the real price both of the coarse and of the fine manufacture, was so much higher in those ancient, than it is in the present times.

Was Adam Smith correct in his claim of a dramatic fall in the price of watches, due to improved productivity, during the century before the Industrial Revolution took off? Kelly and Ó Gráda write (footnotes omitted):

“Most recent studies of the Industrial Revolution focus on the sustained innovations in the three sectors of textile spinning, iron making, and steam power that began in Britain in the latter half of the eighteenth century. To one usually well-informed contemporary observer, things appeared quite different. Discussing technological progress in The Wealth of Nations, Adam Smith (1976, p. 270) ignores most of the famous inventions in these sectors, and instead chooses as a paradigm of technical progress one good that is entirely absent from most current histories of the Industrial Revolution: watches. In fact, Smith makes the notable claim that the price of watches may have fallen by up to 95% over the preceding century, a claim we attempt to evaluate here.

“To test whether watch prices had been falling steadily and steeply since the late seventeenth century, we use the records of more than 3,200 criminal trials at the Old Bailey court in London from 1685 to 1810. Owners of stolen goods gave the value of the items they had lost. Because watches were frequently stolen, we can reliably track how their value changed through time. Contemporaries divided watches into two categories: utilitarian silver or metal watches and more expensive gold ones. After adjusting for inflation, the price of each type of watch falls steadily by 1.3% a year, equivalent to a fall of 75% over a century. If we assume modest rises in the quality of silver watches, so that a watch at the 75th percentile in the 1710s was equivalent to one of median quality in the 1770s, we find an annual fall in real prices of 2% or 87% over a century, not far from what Smith suggests.

“Most of the cost of a silver watch mechanism was the labor involved in cutting, filing, and assembling the parts, so assuming a constant markup (which is probably valid given the small scale of individual producers and the absence of foreign import penetration before 1815) we can gauge the rise of labor productivity in watchmaking by comparing how the price of a watch fell relative to nominal wages. During the period 1680–1810, real wages were roughly constant, so this rise in labor productivity is similar to the fall in real prices of watches.”
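The compounding arithmetic behind those figures is easy to check. As a quick sketch (my own illustration, not code from the paper), a constant annual rate of decline compounds to the century-long falls the authors report:

```python
# Cumulative fall in price implied by a constant annual rate of decline,
# compounded over a century, as in the Kelly and Ó Gráda figures.
def cumulative_fall(annual_rate: float, years: int = 100) -> float:
    """Fraction by which the price falls after compounding the annual decline."""
    return 1 - (1 - annual_rate) ** years

print(f"1.3% per year: {cumulative_fall(0.013):.0%} fall over a century")  # about 73%
print(f"2.0% per year: {cumulative_fall(0.020):.0%} fall over a century")  # about 87%
```

The 2% rate compounds to the 87% fall quoted above; the 1.3% rate compounds to roughly 73%, so the paper's "75% over a century" presumably reflects rounding or a slightly different compounding convention.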

There is a lively body of economic research looking at economic history in the centuries before 1800, with a focus on processes of growth and change during that time. Kelly and Ó Gráda’s effort to find data that can test Smith’s watch-making example is a vivid illustration. But those looking for bigger-picture discussions of economic movements might also check out these papers from the Journal of Economic Perspectives:

In “Gifts of Mars: Warfare and Europe’s Early Rise to Riches,” Nico Voigtländer and Hans-Joachim Voth discuss what they call the “First Divergence” (Fall 2013, 27:4, pp. 165-186), a period from 1400 to 1700 in which the economies of western Europe surged ahead of the rest of the world. They write:

“[W]e argue that Europe’s rise to riches during the First Divergence was driven by the nature of its politics after 1350—it was a highly fragmented continent characterized by constant warfare and major religious strife. Our explanation emphasizes two crucial and inescapable consequences of political rivalry: war and death. No other continent in recorded history fought so frequently, for such long periods, killing such a high proportion of its population. When it comes to destroying human life, the atomic bomb and machine guns may be highly efficient, but nothing rivaled the impact of early modern Europe’s armies spreading hunger and disease. …

“War therefore helped Europe’s precocious rise to riches because the survivors had more land per head available for cultivation. We argue that the feedback loop from higher incomes to more war and higher land-labor ratios was set in motion by the Black Death in the middle of the 14th century. As surplus incomes over and above subsistence increased, tax revenues surged. These in turn financed near-constant wars on an unprecedented scale. Wars raised mortality not primarily because of fighting itself; instead, armies crossing the continent spread deadly diseases such as the plague, typhus, or smallpox. The massive, continued destruction of human life that followed led to reduced population pressure. In our view, it was a prime determinant of Europe’s unusually high per capita incomes before the Industrial Revolution.”

In “Seven Centuries of European Economic Growth and Decline,” Roger Fouquet and Stephen Broadberry look at the body of ongoing research that has sought to build up annual data on output for countries of Europe starting around 1300 (Fall 2015, 29:4, pp. 227-244). They write:

“The new data shows trends in GDP per capita in the key European economies before the Industrial Revolution, identifying episodes of economic growth in specific countries, often lasting for decades. Ultimately, these periods of growth were not sustained, but they noticeably raised GDP per capita. It also shows that many of these economies experienced periods of substantial economic decline. Thus, rather than being stagnant, pre-nineteenth century European economies experienced a great deal of change.”