NAFTA Turns 20

The North American Free Trade Agreement turns 20 this year. What has it done? Gary Clyde Hufbauer, Cathleen Cimino, and Tyler Moran discuss the evidence in "NAFTA at 20: Misleading Charges and Positive Achievements," written for the Peterson Institute for International Economics (May 2014, Number PB14-13).

For those who don't remember, NAFTA was a burning political issue of its time, perhaps the single largest issue launching Ross Perot as a third-party candidate for president in 1992. In June 1992, before the November election, Perot was for a time polling ahead of both George Bush and Bill Clinton. NAFTA was the first time the U.S. had signed a major trade agreement with a country that wasn't another high-income country, and Perot and others made dire predictions about the result. Hufbauer, Cimino, and Moran summarize the political debate:

"In truth the claims on both sides of the NAFTA issue 20 years ago were overblown. Since the Mexican economy is less than one-tenth the size of the US economy, it is not plausible that trade integration could dramatically shape the giant US economy, even though integration could exert a substantial impact on the relatively small Mexican economy. But exaggeration and sound bites are the weapons of political battle, and trade agreements have been on the front line for two decades. President Bill Clinton, for example, declared that NAFTA would “create” 200,000 American jobs in its first two years and a million jobs in its first five years. Not to be outdone, NAFTA opponents Ross Perot and Pat Choate projected job losses of 5.9 million, driven by what Perot derided as a “giant sucking sound” emanating from Mexico that would swallow American jobs. Both of these claims turned out to be overblown, especially the one advanced by Perot and Choate."

Of course, most economists see trade agreements in a fundamentally different way than these kinds of politically driven predictions: not as adding to or subtracting from the total number of jobs in any significant way, but as adding competitive pressures that restructure the labor market–adding jobs in some areas and reducing them in others. The restructuring of an economy from lower-productivity to higher-productivity jobs is a fundamental part of what raises the standard of living over time. Public policy has a legitimate role to play in smoothing this transition.

Hufbauer, Cimino, and Moran point out that every year, even when the economy is growing well, about four million Americans lose jobs involuntarily through layoffs or shutdowns. They estimate that about 5% of this job churn, perhaps 200,000 jobs per year, can be attributed to expanded trade with Mexico. Despite the dire predictions of NAFTA opponents, the U.S. economy boomed through the rest of the 1990s, and after the 2001 recession, the unemployment rate was 5% or less from June 2005 through February 2008, before the Great Recession hit with full force. NAFTA had essentially no effect on the total number of US jobs.

As the Pew surveys on public perception of FTA [free trade agreement] effects on jobs seem to confirm, American workers who owe their jobs to rising exports are usually oblivious to their dependence on foreign sales (in sharp contrast to workers who lose their jobs to rising imports). Based on the increase in US exports to Mexico, averaging $25 billion annually between 2009 and 2013, about 188,000 new US jobs were supported each year by additional sales to Mexico. The figure is almost as large as the jobs lost, but the jobs gained in other sectors pay better. On average, the export-related jobs pay 7 to 15 percent more than the lost import-competing jobs. The wage differential, while positive, is only part of overall US gains from trade with Mexico. . . .

Amidst the arithmetic of jobs lost and gained, it should not be forgotten that a large portion of two-way trade among the NAFTA economies represents imported intermediates that raise the competitiveness of US firms, enabling them to improve their export profile in world markets. In other words, imports benefit not just US consumers but also US firms that can acquire just the right intermediate components at the right price. . . . [G]ains to the US economy average several hundred thousand dollars per net job lost.
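The jobs figures quoted above imply a rough exports-per-job ratio. Here is a back-of-the-envelope sketch: the $25 billion export increase and the 188,000 jobs come from the quoted report, but the per-job ratio is my own derived illustration, not a number the report states.

```python
# Back-of-the-envelope arithmetic behind the report's jobs-supported figure.
# The $25 billion and 188,000 figures are from the text; the implied
# exports-per-job ratio is a derived illustration only.
def exports_per_job(extra_exports_usd: float, jobs_supported: int) -> float:
    """Dollars of additional exports associated with each supported job."""
    return extra_exports_usd / jobs_supported

ratio = exports_per_job(25e9, 188_000)
print(f"~${ratio:,.0f} of additional exports per supported job")
```

On these numbers, each supported job corresponds to roughly $130,000 of additional annual exports, which is one way to see why export-related jobs tend to be relatively high-paying.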

What about effects on wages? Interestingly, while various studies do find that U.S. trade with China has had a negative effect on US manufacturing wages, similar studies do not find that trade with Mexico has had a comparable effect. They explain:

Possibly the main reason the wage impact between Chinese and Mexican imports differs is that US trade with Mexico is roughly balanced and has a large intraindustry component (e.g., autos and parts shipped in both directions), whereas US trade with China is highly unbalanced and entails very large US imports of consumer goods in exchange for much smaller US exports of capital goods. Because of these features, US imports from Mexico compel considerably less job churn between industrial sectors than US imports from China, and this could account for the difference in estimated wage impact.

Overall, here's a picture showing how bilateral merchandise trade among the U.S., Canada, and Mexico has risen since the passage of NAFTA. The blue bars show bilateral imports and exports before the free trade agreement; the dark green bars show how much trade would have increased if it had just mirrored overall economic growth; and the light green bars show the increase in trade above that level.

Hufbauer, Cimino, and Moran estimate the gains in this way:

"Ample econometric evidence documents the substantial payoff from expanded two-way trade in goods and services. Through multiple channels, benefits flow both from larger exports and larger imports. … The channels include more efficient use of resources through the workings of comparative advantage, higher average productivity of surviving firms through “sifting and sorting,” and greater variety of industrial inputs and household goods. … As a rough rule of thumb, for advanced nations, like Canada and the United States, an agreement that promotes an additional $1 billion of two-way trade increases GDP by $200 million. For an emerging country, like Mexico, the payoff ratio is higher: An additional $1 billion of two-way trade probably increases GDP by $500 million. Based on these rules of thumb, the United States is $127 billion richer each year thanks to “extra” trade growth, Canada is $50 billion richer, and Mexico is $170 billion richer. For the United States, with a population of 320 million, the pure economic payoff is almost $400 per person."
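The rule-of-thumb arithmetic in that passage is easy to check. A minimal sketch follows; the payoff ratios, the $127 billion figure, and the 320 million population are all taken from the quoted text.

```python
# The report's rule of thumb: each extra $1 billion of two-way trade raises
# GDP by $200 million in an advanced economy and $500 million in an
# emerging one.
PAYOFF_RATIO = {"advanced": 0.2, "emerging": 0.5}

def gdp_gain(extra_trade_usd: float, economy_type: str) -> float:
    """GDP gain implied by 'extra' two-way trade under the rule of thumb."""
    return extra_trade_usd * PAYOFF_RATIO[economy_type]

# Per-person payoff for the United States, as in the quoted passage:
us_gain = 127e9                 # "$127 billion richer each year"
per_person = us_gain / 320e6    # works out to just under $400 per person
```

Dividing $127 billion by 320 million people does indeed give roughly $397, matching the report's "almost $400 per person."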

As we look around the globalizing economy today, one emerging pattern is regional mixtures of the technology and design skills of high-income countries with the manufacturing skills of lower-income economies. We see Factory Asia, with Japan and Korea partnering with China, Thailand, Indonesia, and others. We see Factory Europe, with Germany in particular partnering with countries of eastern Europe. And we see Factory North America, with the U.S. and Canada partnering with Mexico. In a sense, the NAFTA agreement back in 1994 was a test of the willingness of the U.S. economy to embrace–at least hesitantly!–the future of a globalizing economy. As the bulk of the world economy continues its shift toward middle-income economies, the U.S. economy will face a continual series of adjustments to the globalizing economy in the decades ahead.

Chronic Disease

The definition of "chronic disease" is a little loose, but it generally refers to persistent conditions that for the population as a whole can often be addressed with preventive measures, lifestyle changes, or prescription drugs. In "Checkup Time: Chronic Disease and Wellness in America," a January 2014 report published by the Milken Institute, Anusuya Chatterjee, Sindhu Kubendran, Jaque King, and Ross DeVol point out: "According to the Centers for Disease Control and Prevention (CDC), chronic illness affects one of every two adults in the U.S., and they are responsible for 75 percent of health-care costs."

Their report estimates costs for the five most common chronic diseases: cancer, diabetes, heart disease, hypertension, and stroke. One set of costs is treatment costs; a much bigger cost is the reduction in GDP when people are unable to work at their usual productivity, or at all. Of course, the measure here underestimates the full social cost because it does not seek to place a monetary value on the costs of death and suffering.

Efforts to reduce the burden of chronic disease have had some successes. As the Milken report notes:

"Strategies to battle heart disease have had the most success in lowering prevalence and easing the economic burden. Much of the credit goes to public anti-smoking initiatives. … Learning from that, we can be optimistic that aggressive campaigns to address obesity, improve nutrition, and motivate physical activity will bear fruit in the future. Simple physical movement not only helps to control weight but has been effective in cutting cholesterol levels. Keeping bad cholesterol in check through exercise and nutrition can restrain the menaces of stroke and hypertension. . . . More personalized treatment before disease onset can reduce incidence, and better disease management can curtail the need for sudden visits to expensive sites of services, such as emergency rooms and hospital admissions. The American College of Cardiology and American Heart Association now recommend statins for people at risk of heart disease. If diabetes patients and their doctors manage the disease well, they can avoid potentially grave glycemic events that send costs climbing."

Similarly, the Centers for Disease Control notes on its website: "Four modifiable health risk behaviors—lack of physical activity, poor nutrition, tobacco use, and excessive alcohol consumption—are responsible for much of the illness, suffering, and early death related to chronic diseases."

In short, finding ways to deal better with the incidence of chronic disease offers the possibility of a three-way win: 1) better health and improved life expectancies for many; 2) helping to hold down health care costs by avoiding costly episodes of hospitalization; and, more speculatively, 3) the possibility of building a new job-rich industry with the task of helping those with chronic diseases stick to their daily regimen.

This last point is worth a bit more explication. A chronic disease can be thought of as a set of conditions where the incidence and/or the severity of the condition can be greatly reduced if people take the recommended medications, or follow the recommended diet, or do the recommended exercises. However, if a person who already has a chronic condition doesn't follow the day-to-day recommended regimen–say someone with high blood pressure or a diabetic–the result can be severe and costly episodes of hospitalization. The U.S. health care system is generally quite good at dealing with poor health after it occurs, but traditionally, it has not been primarily focused on prevention, wellness, and helping people stick to their health regimens. The Milken report notes:

"Outside of the medical complex, there is much room for improvement in wellness and illness prevention initiatives. A range of organizations, such as employers, local government agencies, and nonprofits, can promote healthy living through wellness programs. Integrating exercise, nutrition awareness, and other good-for-you practices is the cornerstone of preventing chronic disease and reducing health-care costs."

Personal habits and public policies regarding cleanliness changed so much in the 19th and into the early 20th century that historians sometimes write of a "sanitation revolution," which led to dramatic improvements in public health. It's time to ramp up a "chronic disease revolution," which would include the health care system but also reach beyond it. Modern information technology and the coming arrival of the "Internet of things" will open up new possibilities here. A pillbox could be wired into the Internet, and if it isn't opened at the appropriate times during the day, the person would receive a text or email, then a phone call, and then maybe a reminder visit in person. It's now possible for a home machine to take a small blood sample from a diabetic, analyze that sample, and send in the results. Information technology makes it much easier to have interactive systems that offer reminders about diet or exercise. The benefits from improved management of chronic conditions are potentially very large.
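To make the connected-pillbox idea concrete, here is a toy sketch of the escalation logic such a system might use. It is purely illustrative, not from the Milken report, and the particular contact ladder is my own assumption.

```python
# Toy escalation ladder for a connected pillbox: each additional missed
# dose moves the reminder up one level of intrusiveness.
ESCALATION = ["text or email", "phone call", "in-person visit"]

def next_reminder(consecutive_missed_doses: int):
    """Return the reminder channel to use, or None if no doses are missed."""
    if consecutive_missed_doses <= 0:
        return None  # regimen on track; no reminder needed
    step = min(consecutive_missed_doses, len(ESCALATION)) - 1
    return ESCALATION[step]
```

The design point is that cheap automated nudges come first, and expensive human contact is reserved for cases where the cheap nudges have already failed, which is what would keep such a system affordable at scale.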

GDP Snapshots from the International Comparison Project

Here's a snapshot of per capita GDP just published by the World Bank in "Purchasing Power Parities and Real Expenditures of World Economies: Summary of Results and Findings of the 2011 International Comparison Program." I'll say more about how this calculation is done below. But first, consider the overall pattern. The colored vertical slices are countries, ranked in order of their per capita GDP, although only some of the countries are labelled. The horizontal axis shows the share of global population in each country–adding up to 100% of global population. The vertical axis shows per capita GDP for that country. Thus, India and China show up as wide countries, reflecting their large populations, but below the world average of $13,460 for per capita GDP.

As another viewpoint on this data, consider the 12 largest economies of the world. It's no surprise to see the U.S. and China at the top of the list, but did you know that India is now the third-largest economy in the world, surpassing Japan and Germany? Indeed, six of the 12 largest economies in the world are now "middle-income" countries, shown in boldface type, not high-income countries. The final column shows the rankings on a per capita basis. By this measure, the U.S. ranks behind a number of tiny economies like Qatar, United Arab Emirates, Luxembourg, and Macao, which have populations so small that they don't show up as vertical bars on the chart above. The middle column, the "exchange-rate based" measure, will be discussed below.

Here's one more angle on countries in the global economy. The top panel on "Expenditure Share" shows, for example, that 50.3% of global GDP is in high-income economies, 48.2% is in middle-income economies, and just 1.5% is in low-income economies. The bottom panel shows the comparisons on a per-person basis: for example, GDP per capita is about $40,000 in the high-income countries, $9,000 in the middle-income countries, and $1,800 in the low-income countries.

How are these kinds of calculations carried out? To compare the size of national economies, each of which is measured in its own home currency, you need an exchange rate. One obvious option is to use the market exchange rate, but there are two substantial difficulties with this approach.

One difficulty is that exchange rates can move quite a bit over a few months or a year. For example, a euro was worth $1.46 in May 2011, $1.21 in July 2012, and $1.35 in February 2013. If you converted the GDP of the euro-zone into US dollars using the market exchange rate, it would have looked as if the euro-zone GDP had a depression-style nosedive during the year that the value of the euro fell, and then had an epic boom in the six months as the value of the euro rose. But that conclusion would have been wrong, because it was based on the fluctuating market-based foreign exchange rate, not on actual changes in what was being produced and consumed in the euro-zone economy. Of course, even more severe movements in market exchange rates–like the value of the Argentinian peso falling from $0.25 in early 2011 to about $0.12 now–would be even more misleading if used to draw conclusions about changes in actual GDP.
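The distortion is easy to see with a stylized calculation. Suppose euro-zone GDP were a constant €10 trillion (a made-up round number for illustration; only the exchange rates below come from the text):

```python
# Converting an unchanged euro-zone GDP at the market exchange rates quoted
# in the text: the dollar value swings by trillions even though nothing
# real changed in the euro-zone economy.
def usd_value(gdp_euros: float, dollars_per_euro: float) -> float:
    """Dollar value of a euro-denominated GDP at a given exchange rate."""
    return gdp_euros * dollars_per_euro

EURO_GDP = 10e12  # hypothetical, held constant across all three dates

for date, rate in [("May 2011", 1.46), ("July 2012", 1.21), ("Feb 2013", 1.35)]:
    print(f"{date}: ${usd_value(EURO_GDP, rate) / 1e12:.1f} trillion")
```

The converted figure falls from $14.6 trillion to $12.1 trillion and then rebounds to $13.5 trillion, a phantom slump and boom generated entirely by the exchange rate.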

The other difficulty is that what it costs to buy a certain item in a high-income economy like the United States may be quite a bit different–and often higher–than what it costs to buy the same item in a low-income country. As a result, income in a low-income country can often buy more, which makes comparisons to income in high-income countries treacherous. This is why, in the middle column of the table above, the share of middle-income economies in the global economy looks so much smaller when calculated using market exchange rates.

To overcome the problems with using market-based exchange rates, a common approach since the 1970s has been to use "purchasing power parity" exchange rates determined by the International Comparison Project, which is now part of the World Bank. A PPP exchange rate is based on an effort to figure out the "purchasing power" of a currency in terms of what that currency can buy. Calculating the PPP exchange rate is now done once every six years, because it is an enormous project, involving the collection of a wide range of price data in 199 different countries, trying to adjust for differences in quality, and then compiling it all into comparable price indexes. The just-released report is based on data for 2011. The report explains:

"PPPs are price relatives that show the ratio of the prices in national currencies of the same good or service in different economies. For example, if the price of a hamburger in France is €4.80 and in the United States it is $4.00, the PPP for hamburgers between the two economies is $0.83 to the euro from the French perspective (4.00/4.80) and €1.20 to the dollar from the U.S. perspective (4.80/4.00). In other words, for every euro spent on hamburgers in France, $0.83 would have to be spent in the United States to obtain the same quantity and quality—that is, the same volume—of hamburgers. Conversely, for every dollar spent on hamburgers in the United States, €1.20 would have to be spent in France to obtain the same volume of hamburgers. To compare the volumes of hamburgers purchased in the two economies, either the expenditure on hamburgers in France can be expressed in dollars by dividing by 1.20 or the expenditure on hamburgers in the United States can be expressed in euros by dividing by 0.83.

PPPs are calculated in stages: first for individual goods and services, then for groups of products, and finally for each of the various levels of aggregation up to GDP. PPPs continue to be price relatives whether they refer to a product group, to an aggregation level, or to GDP. In moving up the aggregation hierarchy, the price relatives refer to increasingly complex assortments of goods and services. Thus, if the PPP for GDP between France and the United States is €0.95 to the dollar, it can be inferred that for every dollar spent on GDP in the United States, €0.95 would have to be spent in France to purchase the same volume of goods and services. Purchasing the same volume of goods and services does not mean that the baskets of goods and services purchased in both economies will be identical. The composition of the baskets will vary between economies and reflect differences in tastes, cultures, climates, price structures, product availability, and income levels, but both baskets will, in principle, provide equivalent satisfaction or utility."
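The hamburger example is just a pair of price ratios, which can be sketched directly; the prices below are the ones in the quoted passage.

```python
# PPPs as price relatives, following the report's hamburger example:
# a hamburger costs EUR 4.80 in France and $4.00 in the United States.
def ppp(price_a: float, price_b: float) -> float:
    """Price ratio: units of currency A with the purchasing power of one unit of B."""
    return price_a / price_b

dollars_per_euro = ppp(4.00, 4.80)  # ~0.83, the PPP from the French perspective
euros_per_dollar = ppp(4.80, 4.00)  # 1.20, the PPP from the US perspective

# Equal volumes: $100 of US hamburgers matches 100 * 1.20 = EUR 120 in France.
```

The two perspectives are reciprocals of each other, which is why converting French expenditure to dollars divides by 1.20 while converting US expenditure to euros divides by 0.83.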

Clearly, these PPP exchange rates for 199 economies have a healthy dose of arbitrariness and judgement. Indeed, in 2010 Angus Deaton devoted his Presidential Address to the American Economic Association (freely available on-line here) to detailing the "weak theoretical and empirical foundations" of such measurements. To some extent, such problems may be unavoidable: it's surely a Herculean task to calculate and defend a single monetary measure that can compare the average standard of living in, say, the United States, Japan, China, India, Brazil, Indonesia, Egypt, and Nigeria, along with more than 100 other countries. But the undoubted imperfections of economic statistics don't make such statistics meaningless. It only means they should be interpreted with a skeptical recognition that they are estimates that can have substantial margins of error.

Patterns of U.S. Imprisonment

When it comes to locking people up, the U.S. is an outlier among high-income countries. Melissa S. Kearney and Benjamin H. Harris lay out "Ten Economic Facts about Crime and Incarceration in the United States" in a paper written for the Hamilton Project at the Brookings Institution. Here's a sampling.

As a starting point, the incarceration rate for the U.S. is vastly higher than for the usual comparison group of high-income countries–the number of inmates per 100,000 population is about six times higher in the U.S. than the average for OECD countries.

The U.S. has historically tended to be above-average in its incarceration rates, but this extreme outlier status is relatively recent. Back in the 1970s, for example, the U.S. incarceration rate was in the range of 250 per 100,000 people. By about 2005, the incarceration rate had more than tripled.
 

Of course, a tripling of the incarceration rate means roughly a tripling of the costs of running prisons, too–with a total cost now at about $80 billion per year.

There seems to me solid evidence that the rise in U.S. incarceration rates in the 1980s and 1990s is part of what helped to bring down the crime rate during that time. But the social costs of imprisonment are much larger than the bills paid by government: they include the reduction in future income earned after release from prison, the costs of the loss of freedom to those who are imprisoned, and the costs that absence imposes on families and friends. With the monetary and nonmonetary costs in mind, it seems to me that the U.S. has been resorting to imprisonment far too easily for those who have committed non-violent offenses. If the goal is to continue the reduction in crime rates, finding a way on the margin to spend less on prisons and more on police is probably a productive tradeoff.

For some earlier posts on U.S. rates of imprisonment, see "Reducing the Federal Prison Population" (November 8, 2013), "Too Much Imprisonment" (November 30, 2011), and "U.S. Imprisonment in International Context: What Alternatives?" (May 31, 2012).

Mark Gertler on Financial Crisis Dynamics

David A. Price interviews Mark Gertler in Econ Focus, published by the Federal Reserve Bank of Richmond (Fourth Quarter 2013, pp. 32-36), mainly focusing on the dynamics of the financial crisis and the Great Recession. Here are some of Gertler's comments:

On how he looks back at the causes of the financial crisis of 2007-2008: 

I liken the crisis to 9/11; that is, there was an inkling that something bad could happen. I think there was some sense it was going to be associated with all the financial innovation, but just like with 9/11, we couldn’t see it coming. When we look back, we can piece everything together and make sense of things, but what we didn’t really understand was the fragility in the shadow banking system, how it made the economy very vulnerable. I always think of the Warren Buffett line, “You don’t know who’s naked until you drain the swimming pool.” That’s sort of what happened here. I think when we look back on the crisis, we can explain most of what happened given existing theory. It’s just we couldn’t see it at the time.

On the concept of "financial accelerators" that Gertler developed with Ben Bernanke and Simon Gilchrist:

That’s what we wanted to capture with the financial accelerator, that is, the mutual feedback between the real sector and the financial sector. We also wanted to capture the primary importance of balance sheets — when balance sheets weaken, that causes credit to tighten, leading to downward pressure on the real economy, which further weakens balance sheets. I think that’s what helped to develop the concept of financial accelerators one saw in the financial crisis. . . . Then we found some other implications, like the role of credit spreads: When balance sheets weaken, credit spreads increase, and credit spreads are a natural indicator of financial distress. And again, you saw something similar in the current crisis — with a weakening of the balance sheets of financial institutions and households, you saw credit spreads going up, and the real economy going down.

I didn’t speak to Bernanke a lot during the height of the crisis. But one moment I caught him, asked him how things were going, and he said, “Well, on the bright side, we may have some evidence for the financial accelerator.”

On the Federal Reserve holding high levels of excess reserves:

The way I think about it is that we had a collapse of the shadow banking system, a drastic shrinkage of the shadow banking system. What were shadow banks doing? They were holding mortgage-backed securities and issuing short-term debt to finance them. What’s happened is that that market has moved to the Fed. The Fed now is acting as an investment bank, and it’s taking over those activities. Instead of Lehman Brothers holding these mortgage-backed securities, the Fed is. And the Fed is issuing deposits, if you will, against these securities, the same way these private financial institutions did. It’s easier for the Fed, because it can issue essentially risk-free government debt, and these other institutions couldn’t. . . . It’s possible, as interest rates go up, that the Fed could take some capital losses, as private financial institutions do. But the beauty of the Fed is it doesn’t have to mark to market; it can hold these assets until maturity, and let them run off. So I’m in a camp that thinks there’s been probably a little too much preoccupation with the size of the balance sheet.

On the state of knowledge about optimal capital ratios: 

[W]hat do we do ex ante before a crisis? How should regulation be designed? That’s a huge question that we still haven’t figured out. For example, what’s the optimal capital ratio for a financial institution? . . . I’m reminded of a comment Alan Blinder makes. There are two types of research: interesting but not important, and incredibly boring but important. And figuring out optimal capital ratios fits in the latter category. The reality is that we don’t have definitive empirical work, and we don’t have definitive theory that gives us a clear answer.

Gary Becker and the Time Constraint

Like every student of economics, I frequently find myself confronting the objection that economics is built on an unrealistic and unpalatable assumption that people should be viewed as selfish and rational. While some economic models do take such an approach as a simplified starting point for analysis, that assumption is not the fundamental starting point of the economic approach. Instead, the starting point for economic analysis is that we live in a world of scarcity–most fundamentally, a limit on the time available to us–and so we have no alternative but to make choices. The famous economist Gary Becker, whose time ran out when he died earlier this week, made this point at the start of his 1992 Nobel prize lecture:

"My research uses the economic approach to analyze social issues that range beyond those usually considered by economists. . . . Unlike Marxian analysis, the economic approach I refer to does not assume that individuals are motivated solely by selfishness or gain. It is a method of analysis, not an assumption about particular motivations. Along with others, I have tried to pry economists away from narrow assumptions about self interest. Behavior is driven by a much richer set of values and preferences.

The analysis assumes that individuals maximize welfare as they conceive it, whether they be selfish, altruistic, loyal, spiteful, or masochistic. Their behavior is forward-looking, and it is also consistent over time. In particular, they try as best they can to anticipate the uncertain consequences of their actions. Forward-looking behavior, however, may still be rooted in the past, for the past can exert a long shadow on attitudes and values. Actions are constrained by income, time, imperfect memory and calculating capacities, and other limited resources, and also by the available opportunities in the economy and elsewhere. These opportunities are largely determined by the private and collective actions of other individuals and organizations.

Different constraints are decisive for different situations, but the most fundamental constraint is limited time. Economic and medical progress have greatly increased length of life, but not the physical flow of time itself, which always restricts everyone to twenty-four hours per day. So while goods and services have expanded enormously in rich countries, the total time available to consume has not. Thus, wants remain unsatisfied in rich countries as well as in poor ones. For while the growing abundance of goods may reduce the value of additional goods, time becomes more valuable as goods become more abundant. Utility maximization is of no relevance in a Utopia where everyone’s needs are fully satisfied, but the constant flow of time makes such a Utopia impossible."

Becker's Nobel lecture goes on to review his work in some key areas: discrimination, crime and punishment, formation of human capital, and structure of families. But here, I'd point out a different implication of Becker's view.

There is a long-standing prediction, traceable at least as far back as John Stuart Mill's Principles of Political Economy in 1848 (for example, here), which looks forward to the end of scarcity. The argument is that someday–perhaps not too far into the future–there will be "enough" economic growth. When that time arrives, people will be able to work less or not at all, while enjoying a sufficiency of the material goods that they want along with the time to pursue higher goals. As Mill wrote: "There would be as much scope as ever for all kinds of mental culture, and moral and social progress; as much room for improving the Art of Living, and much more likelihood of its being improved, when minds ceased to be engrossed by the art of getting on. Even the industrial arts might be as earnestly and as successfully cultivated, with this sole difference, that instead of serving no purpose but the increase of wealth, industrial improvements would produce their legitimate effect, that of abridging labour."

When Becker points out that the starting point for economic analysis is scarcity, that the most fundamental embodiment of scarcity is the limits of time, and that time becomes more valuable as per capita incomes rise, his argument implies that economic growth will never render the economic analysis of tradeoffs obsolete.

Big Data in Political Campaigns

How does the collection and use of big data work in political campaigns? David W. Nickerson and Todd Rogers pull back the curtain and offer a glimpse of what's been happening in "Political Campaigns and Big Data," which appears in the Spring 2014 issue of the Journal of Economic Perspectives. Nickerson is a Notre Dame professor of political science who was "'Director of Experiments' in the Analytics Department in the 2012 re-election campaign of President Obama." Rogers is a professor of public policy at Harvard's Kennedy School who "co-founded the Analyst Institute, which uses field experiments and behavioral science insights to develop best practices in progressive political communications." They write:

Over the past six years, campaigns have become increasingly reliant on analyzing large and detailed datasets to create the necessary predictions. While the adoption of these new analytic methods has not radically transformed how campaigns operate, the improved efficiency gives data-savvy campaigns a competitive advantage. This has led the political parties to engage in an arms race to leverage ever-growing volumes of data to create votes. This paper describes the utility and evolution of data in political campaigns. The techniques used as recently as a decade or two ago by political campaigns to predict the tendencies of citizens appear extremely rudimentary by current standards.

Like all articles in JEP back to the first issue in 1987, it is freely available courtesy of the American Economic Association. (Full disclosure: I’ve been managing editor of JEP back to that first issue in 1987.) Here are some points from their essay that jumped out at me.

The starting point for gathering data on potential voters is the set of publicly available official voter files maintained in each state. As Nickerson and Rogers write: “The official voter file contains a wide range of information. In addition to personal information such as date of birth and gender, which are often valuable in developing predictive scores, voter files also contain contact information such as address and phone.” In addition, while the files of course don’t record who anyone voted for, they do show whether people voted and by what method–say, on Election Day, or using some form of early or absentee voting.

This data can then be merged with data from other sources. Census data is available at the level of the voting precinct, showing “the average household income, average level of education, average number of children per household, and ethnic distribution” for that precinct.
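As a concrete illustration of that merge step, here is a minimal sketch in Python. The field names, values, and precincts are all hypothetical; real voter files and census extracts are far larger and messier, but the logic is the same: every voter inherits the averages of his or her precinct.

```python
# Hypothetical records from a state voter file (these files contain identity,
# contact, and turnout-history fields, but never vote choice).
voters = [
    {"voter_id": 101, "precinct": "A", "birth_year": 1950, "voted_2012": True},
    {"voter_id": 102, "precinct": "A", "birth_year": 1985, "voted_2012": False},
    {"voter_id": 103, "precinct": "B", "birth_year": 1992, "voted_2012": True},
]

# Hypothetical precinct-level census averages.
census = {
    "A": {"avg_household_income": 54000, "pct_college": 0.31},
    "B": {"avg_household_income": 71000, "pct_college": 0.48},
}

# Attach each voter's precinct averages to the individual record.
merged = [{**v, **census[v["precinct"]]} for v in voters]
for row in merged:
    print(row["voter_id"], row["avg_household_income"], row["pct_college"])
```

Note that the precinct average is the same for every voter in the precinct: the census adds context, not individual-level detail.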

Additional data can be purchased from commercial firms. Nickerson and Rogers report that the most cost-effective data to purchase is updated phone numbers (because the phone numbers in the state voter registration files are often outdated after a few years) as well as data about “estimated years of education, home ownership status, and mortgage information.” Other information, while available, isn’t cost-effective to buy. They write: “In contrast, information on magazine subscriptions, car purchases, and other consumer tastes are relatively expensive to purchase from vendors, and also tend to be available for very few individuals. Given this limited coverage, this data tends not to be useful in constructing predictive scores for the entire population—and so campaigns generally avoid or limit purchases of this kind of consumer data.”

Finally, a major source of voter information is provided by voters themselves when they sign up at a candidate\’s website or party website. Not only do people provide information directly, but the campaign can also keep track of what sorts of topics or messages cause people to respond by clicking on a link or donating money, so much can be learned about people in that way.

These sources of information have some interesting implications. Campaigns know more about those who vote, and who are politically active, than about those who don’t vote regularly or who are not politically active. Campaigns also tend to know more about their own supporters. Nickerson and Rogers write: “To the extent that predictive scores are useful and reveal true unobserved characteristics about citizens, it means that multiple organizations will produce predictive scores that recommend targeting the same sets of citizens. For example, some citizens might find themselves contacted many times, while other citizens—like those with low turnout behavior scores in 2012—might be ignored by nearly every campaign.”
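To make the idea of a “predictive score” concrete, here is a toy sketch of a turnout score built from two voter-file fields. The logistic functional form is a common modeling choice, but the weights here are invented purely for illustration; a real campaign would fit them to historical voter-file data rather than write them by hand.

```python
import math

def turnout_score(voted_last_time: bool, age: int) -> float:
    """Toy logistic model of turnout probability.

    The coefficients below are hypothetical. A fitted model would use many
    more fields (turnout history, registration date, precinct averages...).
    """
    x = -1.0 + 2.0 * voted_last_time + 0.02 * (age - 40)
    return 1.0 / (1.0 + math.exp(-x))

# An older habitual voter scores high; a young non-voter scores low,
# which is why the latter risks being ignored by nearly every campaign.
print(round(turnout_score(True, 68), 2))
print(round(turnout_score(False, 22), 2))
```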

After collecting, collating, and coordinating all this data, the question is how to use it. Nickerson and Rogers point out that focusing on those who are already very likely to vote for you, or on those who are already very likely to vote against you, tends to be a waste of money. Thus, one way that data can make a campaign more cost-effective is by minimizing spending on those who are unpersuadable or who are already persuaded. This also reduces the risk of “backlash,” in which attempts to encourage voting for your candidate instead rev up voters for the other side.

Another possible advantage is that campaigns can run small-scale experiments about what messages or actions are likely to cause a certain slice of voters to take an action–clicking on a link, volunteering time, putting up a sign, giving money–that is likely to be correlated with voting for the candidate later on. When small-scale experiments have shown what steps are likely to be effective, the approach can be used at larger scale. How effective can such steps be? They write: “Suppose a campaign’s persuasive communications has an average treatment effect of 2 percentage points—a number on the high end of persuasion effects observed in high-expense campaigns: that is, if half of citizens who vote already planned to vote for the candidate, 52 percent would support the candidate after the persuasive communication.”
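The arithmetic behind that quotation, and behind the targeting argument above, can be laid out in a few lines. Only the 2-percentage-point treatment effect comes from the paper; the number of contacts and the turnout rates below are hypothetical, chosen just to show why concentrating contacts on persuadable, likely voters pays off.

```python
# Each contact shifts support probability by the treatment effect
# (0.50 -> 0.52 in the quoted example), but that shift only yields a
# vote if the contacted person actually turns out.
treatment_effect = 0.02     # from the quoted passage
contacts = 100_000          # hypothetical campaign contact budget

# Spreading contacts over everyone, at a hypothetical 60% turnout rate:
turnout_untargeted = 0.6
votes_untargeted = contacts * treatment_effect * turnout_untargeted

# Concentrating the same contacts on voters whose predictive scores say
# they are persuadable and likely to vote (hypothetical 90% turnout):
turnout_targeted = 0.9
votes_targeted = contacts * treatment_effect * turnout_targeted

print(votes_untargeted)  # 1200.0
print(votes_targeted)    # 1800.0
```

Same budget, same message, but the targeted allocation yields half again as many expected votes in this stylized calculation.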

In their conclusion, Nickerson and Rogers point out that using big data to drive campaigning, in a very real way, makes traditional boots-on-the-ground campaigning more important than ever. After all, the bottom line of the campaign is still to push for more of your voters to turn out. Big data can help a campaign allocate resources more cost-effectively, but the campaign still needs to do the actual work.

“The improved capability to target individual voters offers campaigns an opportunity to concentrate their resources where they will be most effective. This power, however, has not radically transformed the nature of campaign work. One could argue that the growing impact of data analytics in campaigns has amplified the importance of traditional campaign work. . . . Professional phone interviews are still used for message development and tracking, but they are also essential for developing predictive scores of candidate support and measuring changes in voter preferences in randomized experiments. Similarly, better targeting has made grassroots campaign tactics more efficient and therefore more cost competitive with mass communication forms of outreach. Volunteers still need to persuade skeptical neighbors, but they are now better able to focus on persuadable neighbors and use messages more likely to resonate. This leads to higher-quality interactions and (potentially) a more pleasant volunteer experience. So while savvy campaigns will harness the power of predictive scores, the scores will only help the campaigns that were already effective.”

Work Philosophy from Gabriel García Márquez

I’m often at least a few beats behind the tune on news that doesn’t involve economics or policy, so I just heard a few days ago that Gabriel García Márquez, who won the Nobel Prize in Literature in 1982 for One Hundred Years of Solitude and other works, died on April 17. I could see the genius in his work, but it was never among my favorites: the magic in his “magic realism” felt to me a little too contrived and mannered. But I was reading in English translation, not in Spanish, and what do I know about literature, anyway?

I do have a quotation from García Márquez up on my office door that conveys a home truth about my own work life. It’s from an interview with him that was published in the Boston Review (March-April 1983, pp. 26-27), and later reprinted in the 2006 collection Conversations with Gabriel García Márquez, edited by Gene H. Bell-Villada (p. 137). He was asked how he felt about One Hundred Years of Solitude being used as required reading in college courses and cited by academics. Here’s part of his answer:

“On another occasion a sociologist from Austin, Texas, came to see me because he’d grown dissatisfied with his methods, found them arid, insufficient. So he asked me what my own method was. I told him I didn’t have a method. All I do is read a lot, think a lot, and rewrite constantly. It’s not a scientific thing.”

I’m the managing editor of an academic economics journal, and an occasional lecturer and writer. That ethic might serve as a useful motto for editors everywhere.

Farewell to Notes

When the first issue of the American Economic Review, which would become the preeminent research journal in academic economics, was published back in 1911, it devoted 13 pages to “Notes”–that is, news about the profession of economics. At a time when the number of academic economists was much smaller, and methods of broad-based communication were much slower, the “Notes” included mentions of conferences that had already happened, books that were soon to be published, contributions of historical papers to libraries, even the sabbatical plans of some prominent economists. When I took the job as managing editor of the Journal of Economic Perspectives in 1987, we inherited the “Notes” from the AER. But now, after a run of 103 years, the rise of the web means that the time has come to stop publishing conference announcements, calls for papers, awards, and the like in a quarterly journal–or indeed on paper at all.

In the just-released Spring 2014 issue of JEP, I commemorated the occasion with a “Farewell to Notes.” Here are the opening and closing paragraphs:

The great composer Johannes Brahms once remarked: “It is not difficult to compose; but it is incredibly difficult to let the superfluous notes drop under the table” (as quoted in Musgrave and Pascall 1987, p. 138). Here at the Journal of Economic Perspectives, the challenges of composing each issue remain, but the “Notes” have become superfluous, at least in their paper version.

The “Notes,” as those who lurk in these back pages of JEP know well, announce forthcoming conferences, calls for papers, awards, and the like. However, the Internet has made it obsolete to deliver such information on paper in a quarterly journal. … But as we say farewell to the print version of the “Notes,” a moment of remembrance seems appropriate. The first issue of the American Economic Review, published in 1911, found it worthwhile to devote 13 out of 219 total pages to “Notes.” …

Admittedly, the ending of the “Notes” section as printed within the covers of the Journal of Economic Perspectives doesn’t rank with some of the other great endings, like the revelation of what Citizen Kane meant by “Rosebud”; or “Forget it, Jake, it’s Chinatown”; or “Oh, Auntie Em, there’s no place like home!” But in its own small way, the end of the paper version of the “Notes” after its run of 103 years is one more sign of the remarkable changes in information and communication technology that surround us—and thus worth remarking.

Spring 2014 Journal of Economic Perspectives

The Spring 2014 issue of the Journal of Economic Perspectives is now freely available on-line, courtesy of the publisher, the American Economic Association. Indeed, not only this issue but all previous issues back to 1987 are available. (Full disclosure: I’ve been the Managing Editor since the journal started, so this issue is #108 for me.) I’ll probably blog about some of these articles in the next week or two. But for now, I’ll first list the table of contents, and then below will provide abstracts of articles and weblinks.

Symposium: Big Data

“Big Data: New Tricks for Econometrics,” by Hal R. Varian
“High-Dimensional Methods and Inference on Structural and Treatment Effects,” by Alexandre Belloni, Victor Chernozhukov and Christian Hansen
“Political Campaigns and Big Data,” by David W. Nickerson and Todd Rogers
“Privacy and Data-Based Research,” by Ori Heffetz and Katrina Ligett

Symposium: Global Supply Chains

“Slicing Up Global Value Chains,” by Marcel P. Timmer, Abdul Azeez Erumban, Bart Los, Robert Stehrer and Gaaitzen J. de Vries
“Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research,” by Robert C. Johnson

Articles and Features

“Raj Chetty: 2013 Clark Medal Recipient,” by Martin Feldstein
“Fluctuations in Uncertainty,” by Nicholas Bloom
“The Market for Blood,” by Robert Slonim, Carmen Wang and Ellen Garbarino
“Retrospectives: The Cyclical Behavior of Labor Productivity and the Emergence of the Labor Hoarding Concept,” by Jeff E. Biddle
“Recommendations for Further Reading,” by Timothy Taylor
“Correction and Update: The Economic Effects of Climate Change,” by Richard S. J. Tol
“Farewell to Notes,” by Timothy Taylor

________________________

And here are the abstracts and links:

Symposium: Big Data


“Big Data: New Tricks for Econometrics,” by Hal R. Varian

Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
Full-Text Access | Supplementary Materials

“High-Dimensional Methods and Inference on Structural and Treatment Effects,” by Alexandre Belloni, Victor Chernozhukov and Christian Hansen

Data with a large number of variables relative to the sample size—“high-dimensional data”—are readily available and increasingly common in empirical economics. High-dimensional data arise through a combination of two phenomena. First, the data may be inherently high dimensional in that many different characteristics per observation are available. For example, the US Census collects information on hundreds of individual characteristics and scanner datasets record transaction-level data for households across a wide range of products. Second, even when the number of available variables is relatively small, researchers rarely know the exact functional form with which the small number of variables enter the model of interest. Researchers are thus faced with a large set of potential variables formed by different ways of interacting and transforming the underlying variables. This paper provides an overview of how innovations in “data mining” can be adapted and modified to provide high-quality inference about model parameters. Note that we use the term “data mining” in a modern sense which denotes a principled search for “true” predictive power that guards against false discovery and overfitting, does not erroneously equate in-sample fit to out-of-sample predictive ability, and accurately accounts for using the same data to examine many different hypotheses or models.
Full-Text Access | Supplementary Materials

“Political Campaigns and Big Data,” by David W. Nickerson and Todd Rogers

Modern campaigns develop databases of detailed information about citizens to inform electoral strategy and to guide tactical efforts. Despite sensational reports about the value of individual consumer data, the most valuable information campaigns acquire comes from the behaviors and direct responses provided by citizens themselves. Campaign data analysts develop models using this information to produce individual-level predictions about citizens’ likelihoods of performing certain political behaviors, of supporting candidates and issues, and of changing their support conditional on being targeted with specific campaign interventions. The use of these predictive scores has increased dramatically since 2004, and their use could yield sizable gains to campaigns that harness them. At the same time, their widespread use effectively creates a coordination game with incomplete information between allied organizations. As such, organizations would benefit from partitioning the electorate to not duplicate efforts, but legal and political constraints preclude that possibility.
Full-Text Access | Supplementary Materials

“Privacy and Data-Based Research,” by Ori Heffetz and Katrina Ligett

What can we, as users of microdata, formally guarantee to the individuals (or firms) in our dataset, regarding their privacy? We retell a few stories, well-known in data-privacy circles, of failed anonymization attempts in publicly released datasets. We then provide a mostly informal introduction to several ideas from the literature on differential privacy, an active literature in computer science that studies formal approaches to preserving the privacy of individuals in statistical databases. We apply some of its insights to situations routinely faced by applied economists, emphasizing big-data contexts.
Full-Text Access | Supplementary Materials

Symposium: Global Supply Chains

“Slicing Up Global Value Chains,” by Marcel P. Timmer, Abdul Azeez Erumban, Bart Los, Robert Stehrer and Gaaitzen J. de Vries

In this paper, we “slice up the global value chain” using a decomposition technique that has recently become feasible due to the development of the World Input-Output Database. We trace the value added by all labor and capital that is directly and indirectly needed for the production of final manufacturing goods. The production systems of these goods are highly prone to international fragmentation as many stages can be undertaken in any country with little variation in quality. We seek to establish a series of facts concerning the global fragmentation of production that can serve as a starting point for future analysis. We describe four major trends. First, international fragmentation, as measured by the foreign value-added content of production, has rapidly increased since the early 1990s. Second, in most global value chains there is a strong shift towards value being added by capital and high-skilled labor, and away from less-skilled labor. Third, within global value chains, advanced nations increasingly specialize in activities carried out by high-skilled workers. Fourth, emerging economies surprisingly specialize in capital-intensive activities.
Full-Text Access | Supplementary Materials

“Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research,” by Robert C. Johnson

Due to the rise of global supply chains, gross exports do not accurately measure the amount of value added exchanged between countries. I highlight five facts about differences between gross and value-added exports. These differences are large and growing over time, currently around 25 percent, and manufacturing trade looks more important, relative to services, in gross than value-added terms. These differences are also heterogeneous across countries and bilateral partners, and changing unevenly across countries and partners over time. Taking these differences into account enables researchers to obtain better quantitative answers to important macroeconomic and trade questions. I discuss how the facts inform analysis of the transmission of shocks across countries; the mechanics of trade balance adjustments; the impact of frictions on trade; the role of endowments and comparative advantage; and trade policy.
Full-Text Access | Supplementary Materials

Articles and Features

“Raj Chetty: 2013 Clark Medal Recipient,” by Martin Feldstein

Raj Chetty is eminently deserving of being awarded the John Bates Clark Medal at the age of 33. His research has transformed the field of public economics. His work is motivated by important public policy issues in the fields of taxation, social insurance, and public spending for education. He approaches his subjects with a creative redefinition of the problems that he studies, and his empirical methods often draw on experimental evidence or unprecedentedly large sets of integrated data. While his work is founded on basic microeconomics, he modifies this framework to take into account behavioral and institutional considerations. Chetty is a prolific scholar. It is difficult to summarize all of Chetty’s research or even to capture the details of his most significant papers. I have therefore chosen a selection of Chetty’s important papers dealing with taxation, social insurance, and education that contributed to his selection as the winner of the John Bates Clark Medal.
Full-Text Access | Supplementary Materials

“Fluctuations in Uncertainty,” by Nicholas Bloom

Uncertainty is an amorphous concept. It reflects uncertainty in the minds of consumers, managers, and policymakers about possible futures. It is also a broad concept, including uncertainty over the path of macro phenomena like GDP growth, micro phenomena like the growth rate of firms, and noneconomic events like war and climate change. In this essay, I address four questions about uncertainty. First, what are some facts and patterns about economic uncertainty? Both macro and micro uncertainty appear to rise sharply in recessions and fall in booms. Uncertainty also varies heavily across countries—developing countries appear to have about one-third more macro uncertainty than developed countries. Second, why does uncertainty vary during business cycles? Third, do fluctuations in uncertainty affect behavior? Fourth, has higher uncertainty worsened the Great Recession and slowed the recovery? Much of this discussion is based on research on uncertainty from the last five years, reflecting the recent growth of the literature.
Full-Text Access | Supplementary Materials

“The Market for Blood,” by Robert Slonim, Carmen Wang and Ellen Garbarino

Donating blood, “the gift of life,” is among the noblest activities and it is performed worldwide nearly 100 million times annually. The economic perspective presented here shows how the gift of life, albeit noble and often motivated by altruism, is heavily influenced by standard economic forces including supply and demand, economies of scale, and moral hazard. These forces, shaped by technological advances, have driven the evolution of blood donation markets from thin one-to-one “marriage markets” in which each recipient needed a personal blood donor, to thick, impersonalized, diffuse markets. Today, imbalances between aggregate supply and demand are a major challenge in blood markets, including excess supply after disasters and insufficient supply at other times. These imbalances are not unexpected given that the blood market operates without market prices and with limited storage length (about six weeks) for whole blood. Yet shifting to a system of paying blood donors seems a practical impossibility given attitudes toward paying blood donors and concerns that a paid system could compromise blood safety. Nonetheless, we believe that an economic perspective offers promising directions to increase supply and improve the supply and demand balance even in the presence of volunteer supply and with the absence of market prices.
Full-Text Access | Supplementary Materials

“Retrospectives: The Cyclical Behavior of Labor Productivity and the Emergence of the Labor Hoarding Concept,” by Jeff E. Biddle

The concept of “labor hoarding,” at least in its modern form, was first fully articulated in the early 1960s by Arthur Okun (1963). By the end of the 20th century, the concept of “labor hoarding” had become an accepted part of economists’ explanations of the workings of labor markets and of the relationship between labor productivity and economic fluctuations. The emergence of this concept involved the conjunction of three key elements: the fact that measured labor productivity was found to be procyclical, rising during expansions and falling during contractions; a perceived contradiction with the theory of the neoclassical firm in a competitive economy; and a possible explanation based on optimizing behavior on the part of firms. Each of these three elements—fact, contradiction, and explanation—has a history of its own, dating back to at least the opening decades of the twentieth century. Telling the story of the emergence of the modern labor hoarding concept requires recounting these three histories, histories that involve the work of economists motivated by diverse purposes and often not mainly, if at all, concerned with the questions that the labor hoarding concept was ultimately used to address. As a final twist to the story, the long-standing positive relationship between labor productivity and output in the US economy began to disappear in the late 1980s; and during the Great Recession, labor productivity rose while the economy contracted.
Full-Text Access | Supplementary Materials

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access | Supplementary Materials

“Correction and Update: The Economic Effects of Climate Change,” by Richard S. J. Tol

Gremlins intervened in the preparation of my paper “The Economic Effects of Climate Change,” published in the Spring 2009 issue of this journal. In Table 1 of that paper, titled “Estimates of the Welfare Impact of Climate Change,” minus signs were dropped from the two impact estimates, one by Plambeck and Hope (1996) and one by Hope (2006). In Figure 1 of that paper, titled “Fourteen Estimates of the Global Economic Impact of Climate Change,” and in the various analyses that support that figure, the minus sign was dropped from only one of the two estimates. The corresponding Table 1 and Figure 1 presented here correct these errors. Figure 2, titled “Twenty-One Estimates of the Global Economic Impact of Climate Change,” adds two overlooked estimates from before the time of the original 2009 paper and five more recent ones.
Full-Text Access | Supplementary Materials

“Farewell to Notes,” by Timothy Taylor
Full-Text Access | Supplementary Materials