GDP Snapshots from the International Comparison Project

Here's a snapshot of per capita GDP just published by the World Bank in "Purchasing Power Parities and Real Expenditures of World Economies: Summary of Results and Findings of the 2011 International Comparison Program." I'll say more about how this calculation is done below. But first, consider the overall pattern. The colored vertical slices are countries, ranked in order of their per capita GDP, although only some of the countries are labelled. The horizontal axis shows the share of global population in each country, adding up to 100% of global population. The vertical axis shows per capita GDP for that country. Thus, India and China show up as wide countries, reflecting their large populations, but below the world average of $13,460 for per capita GDP.

As another viewpoint on this data, consider the 12 largest economies of the world. It's no surprise to see the U.S. and China at the top of the list, but did you know that India is now the third-largest economy in the world, surpassing Japan and Germany? Indeed, six of the 12 largest economies in the world are now "middle-income" countries, shown in boldface type, not high-income countries. The final column shows the rankings on a per capita basis. By this measure, the U.S. ranks behind a number of tiny economies like Qatar, United Arab Emirates, Luxembourg, and Macao, which have populations so small that they don't show up as vertical bars on the chart above. The middle column, the "exchange-rate based" measure, will be discussed below.

Here's one more angle on countries in the global economy. The top panel on "Expenditure Share" shows, for example, that 50.3% of global GDP is in high-income economies, 48.2% is in middle-income economies, and just 1.5% is in low-income economies. The bottom panel shows the comparisons on a per-person basis: for example, GDP per capita is about $40,000 in the high-income countries, $9,000 in the middle-income countries, and $1,800 in the low-income countries.

How are these kinds of calculations carried out? To compare the size of national economies, each of which is measured in its own home currency, you need an exchange rate. One obvious option is to use the market exchange rate, but there are two substantial difficulties with this approach.

One difficulty is that exchange rates can move quite a bit over a few months or a year. For example, a euro was worth $1.46 in May 2011, $1.21 in July 2012, and $1.35 in February 2013. If you converted the GDP of the euro-zone into US dollars using the market exchange rate, it would have looked as if euro-zone GDP had a depression-style nosedive during the year that the value of the euro fell, and then an epic boom in the six months when the value of the euro rose. But that conclusion would have been wrong, because it was based on the fluctuating market-based foreign exchange rate, not on actual changes in what was being produced and consumed in the euro-zone economy. Of course, even more severe movements in market exchange rates–like the value of the Argentinian peso falling from $0.25 in early 2011 to about $0.12 now–would be even more misleading if used to draw conclusions about changes in actual GDP.
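A quick back-of-the-envelope sketch of this valuation effect: the euro-zone GDP figure below is a made-up round number for illustration (only the exchange rates come from the text), yet the dollar value of the "economy" swings by double digits even though nothing real has changed.

```python
# Illustrative only: euro-zone output is held fixed in euros, while its
# dollar value swings purely because the market exchange rate moves.
EUROZONE_GDP_EUR = 10e12  # hypothetical: 10 trillion euros (not official data)

# Dollars per euro at the dates mentioned in the text
rates = {"May 2011": 1.46, "July 2012": 1.21, "February 2013": 1.35}

gdp_in_usd = {date: EUROZONE_GDP_EUR * rate for date, rate in rates.items()}

# The apparent "nosedive" from May 2011 to July 2012 is pure valuation:
drop = 1 - rates["July 2012"] / rates["May 2011"]  # roughly a 17% decline
```

The same fixed quantity of goods and services would appear to shrink by about a sixth in a year, which is exactly the spurious conclusion the PPP approach is designed to avoid.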

The other difficulty is that what it costs to buy a certain item in a high-income economy like the United States may be quite a bit different–and often higher–than what it costs to buy the same item in a low-income country. As a result, income in a low-income country can often buy more, which makes comparisons to income in high-income countries treacherous. This is why, in the middle column of the table above, the share of middle-income economies in the global economy looks so much smaller when calculated using market exchange rates.

To overcome the problems with using market-based exchange rates, a common approach since the 1970s has been to use "purchasing power parity" exchange rates determined by the International Comparison Project, which is now part of the World Bank. A PPP exchange rate is based on an effort to figure out the "purchasing power" of a currency in terms of what that currency can buy. Calculating the PPP exchange rate is now done once every six years, because it is an enormous project, involving the collection of a wide range of price data in 199 different countries, trying to adjust for differences in quality, and then compiling it all into comparable price indexes. The just-released report is based on data for 2011. The report explains:

"PPPs are price relatives that show the ratio of the prices in national currencies of the same good or service in different economies. For example, if the price of a hamburger in France is €4.80 and in the United States it is $4.00, the PPP for hamburgers between the two economies is $0.83 to the euro from the French perspective (4.00/4.80) and €1.20 to the dollar from the U.S. perspective (4.80/4.00). In other words, for every euro spent on hamburgers in France, $0.83 would have to be spent in the United States to obtain the same quantity and quality—that is, the same volume—of hamburgers. Conversely, for every dollar spent on hamburgers in the United States, €1.20 would have to be spent in France to obtain the same volume of hamburgers. To compare the volumes of hamburgers purchased in the two economies, either the expenditure on hamburgers in France can be expressed in dollars by dividing by 1.20 or the expenditure on hamburgers in the United States can be expressed in euros by dividing by 0.83.

PPPs are calculated in stages: first for individual goods and services, then for groups of products, and finally for each of the various levels of aggregation up to GDP. PPPs continue to be price relatives whether they refer to a product group, to an aggregation level, or to GDP. In moving up the aggregation hierarchy, the price relatives refer to increasingly complex assortments of goods and services. Thus, if the PPP for GDP between France and the United States is €0.95 to the dollar, it can be inferred that for every dollar spent on GDP in the United States, €0.95 would have to be spent in France to purchase the same volume of goods and services. Purchasing the same volume of goods and services does not mean that the baskets of goods and services purchased in both economies will be identical. The composition of the baskets will vary between economies and reflect differences in tastes, cultures, climates, price structures, product availability, and income levels, but both baskets will, in principle, provide equivalent satisfaction or utility."
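The hamburger arithmetic in the quoted passage can be written out directly. The prices come from the report's example; the expenditure figures are hypothetical, chosen so that each country's spending buys the same volume.

```python
# Price relatives from the report's hamburger example
price_france_eur = 4.80  # price of a hamburger in France
price_us_usd = 4.00      # price of a hamburger in the United States

ppp_french_view = price_us_usd / price_france_eur  # dollars per euro, ~0.83
ppp_us_view = price_france_eur / price_us_usd      # euros per dollar, 1.20

# To compare volumes, deflate each country's spending by its own price.
# Hypothetical expenditures: each buys about 100 hamburgers at home.
spend_france_eur = 480.0
spend_us_usd = 400.0
volume_france = spend_france_eur / price_france_eur
volume_us = spend_us_usd / price_us_usd

# Equivalently, express French spending in dollars by dividing by the PPP,
# as the passage describes: 480 euros / 1.20 = 400 dollars' worth.
spend_france_in_usd = spend_france_eur / ppp_us_view
```

The point of the exercise is that once expenditures are deflated by the PPP, equal "volumes" of hamburgers show up as equal numbers, regardless of which currency they were originally measured in.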

Clearly, these PPP exchange rates for 199 economies have a healthy dose of arbitrariness and judgment. Indeed, in 2010 Angus Deaton devoted his Presidential Address to the American Economic Association (freely available on-line here) to detailing the "weak theoretical and empirical foundations" of such measurements. To some extent, such problems may be unavoidable: it's surely a Herculean task to calculate and defend a single monetary measure that can compare average standard of living in, say, the United States, Japan, China, India, Brazil, Indonesia, Egypt and Nigeria, along with more than 100 other countries. But the undoubted imperfections of economic statistics don't make such statistics meaningless; they only mean that the statistics should be interpreted with a skeptical recognition that they are estimates that can have substantial margins of error.

Patterns of U.S. Imprisonment

When it comes to locking people up, the U.S. is an outlier among high-income countries. Melissa S. Kearney and Benjamin H. Harris lay out "Ten Economic Facts about Crime and Incarceration in the United States" in a paper written for the Hamilton Project at the Brookings Institution. Here's a sampling.

As a starting point, the incarceration rate for the U.S. is vastly higher than for the usual comparison group of high-income countries–the number of inmates per 100,000 population is about six times higher in the U.S. than the average for OECD countries.

The U.S. has historically tended to be above-average in its incarceration rates, but this extreme outlier status is relatively recent. Back in the 1970s, for example, the U.S. incarceration rate was in the range of 250 per 100,000 people. By about 2005, the incarceration rate had more than tripled.

Of course, a tripling of the incarceration rate means roughly a tripling of the costs of running prisons, too–with a total cost now at about $80 billion per year.

There seems to me solid evidence that the rise in U.S. incarceration rates in the 1980s and 1990s is part of what helped to bring down the crime rate during that time. But the social costs of imprisonment are much larger than the bills paid by government: they include the reduction in future income earned after being released from prison, the costs of loss of freedom to those who are imprisoned, and costs that absence imposes on families and friends. With the monetary and nonmonetary costs in mind, it seems to me that the U.S. has been resorting to imprisonment far too easily for those who have committed non-violent offenses. If the goal is to continue the reduction in crime rates, finding a way on the margin to spend less on prisons and more on police is probably a productive tradeoff.

For some earlier posts on U.S. rates of imprisonment, see "Reducing the Federal Prison Population" (November 8, 2013), "Too Much Imprisonment" (November 30, 2011), and "U.S. Imprisonment in International Context: What Alternatives?" (May 31, 2012).

Mark Gertler on Financial Crisis Dynamics

David A. Price interviews Mark Gertler in Econ Focus, published by the Federal Reserve Bank of Richmond (Fourth Quarter 2013, pp. 32-36), mainly focusing on the dynamics of the financial crisis and the Great Recession. Here are some of Gertler's comments:

On how he looks back at the causes of the financial crisis of 2007-2008: 

I liken the crisis to 9/11; that is, there was an inkling that something bad could happen. I think there was some sense it was going to be associated with all the financial innovation, but just like with 9/11, we couldn’t see it coming. When we look back, we can piece everything together and make sense of things, but what we didn’t really understand was the fragility in the shadow banking system, how it made the economy very vulnerable. I always think of the Warren Buffett line, “You don’t know who’s naked until you drain the swimming pool.” That’s sort of what happened here. I think when we look back on the crisis, we can explain most of what happened given existing theory. It’s just we couldn’t see it at the time.

On the concept of "financial accelerators" that Gertler developed with Ben Bernanke and Simon Gilchrist:

That’s what we wanted to capture with the financial accelerator, that is, the mutual feedback between the real sector and the financial sector. We also wanted to capture the primary importance of balance sheets — when balance sheets weaken, that causes credit to tighten, leading to downward pressure on the real economy, which further weakens balance sheets. I think that’s what helped to develop the concept of financial accelerators one saw in the financial crisis. . . . Then we found some other implications, like the role of credit spreads: When balance sheets weaken, credit spreads increase, and credit spreads are a natural indicator of financial distress. And again, you saw something similar in the current crisis — with a weakening of the balance sheets of financial institutions and households, you saw credit spreads going up, and the real economy going down.

I didn’t speak to Bernanke a lot during the height of the crisis. But one moment I caught him, asked him how things were going, and he said, “Well, on the bright side, we may have some evidence for the financial accelerator.”

 On the Federal Reserve holding high levels of excess reserves: 

The way I think about it is that we had a collapse of the shadow banking system, a drastic shrinkage of the shadow banking system. What were shadow banks doing? They were holding mortgage-backed securities and issuing short-term debt to finance them. What’s happened is that that market has moved to the Fed. The Fed now is acting as an investment bank, and it’s taking over those activities. Instead of Lehman Brothers holding these mortgage-backed securities, the Fed is. And the Fed is issuing deposits, if you will, against these securities, the same way these private financial institutions did. It’s easier for the Fed, because it can issue essentially risk-free government debt, and these other institutions couldn’t. . . . It’s possible, as interest rates go up, that the Fed could take some capital losses, as private financial institutions do. But the beauty of the Fed is it doesn’t have to mark to market; it can hold these assets until maturity, and let them run off. So I’m in a camp that thinks there’s been probably a little too much preoccupation with the size of the balance sheet.

On the state of knowledge about optimal capital ratios: 

[W]hat do we do ex ante before a crisis? How should regulation be designed? That’s a huge question that we still haven’t figured out. For example, what’s the optimal capital ratio for a financial institution? . . . I’m reminded of a comment Alan Blinder makes. There are two types of research: interesting but not important, and incredibly boring but important. And figuring out optimal capital ratios fits in the latter category. The reality is that we don’t have definitive empirical work, and we don’t have definitive theory that gives us a clear answer.

Gary Becker and the Time Constraint

Like every student of economics, I frequently find myself confronting the objection that economics is built on an unrealistic and unpalatable assumption that people should be viewed as selfish and rational. While some economic models do take such an approach as a simplified starting-point for analysis, that assumption is not the fundamental starting point of the economic approach. Instead, the starting point for economic analysis is that we live in a world of scarcity–most fundamentally a limit on the time available to us–and so we have no alternative but to make choices. The famous economist Gary Becker, whose time ran out when he died earlier this week, made this point at the start of his 1992 Nobel prize lecture:

"My research uses the economic approach to analyze social issues that range beyond those usually considered by economists. . . . Unlike Marxian analysis, the economic approach I refer to does not assume that individuals are motivated solely by selfishness or gain. It is a method of analysis, not an assumption about particular motivations. Along with others, I have tried to pry economists away from narrow assumptions about self interest. Behavior is driven by a much richer set of values and preferences.

The analysis assumes that individuals maximize welfare as they conceive it, whether they be selfish, altruistic, loyal, spiteful, or masochistic. Their behavior is forward-looking, and it is also consistent over time. In particular, they try as best they can to anticipate the uncertain consequences of their actions. Forward-looking behavior, however, may still be rooted in the past, for the past can exert a long shadow on attitudes and values. Actions are constrained by income, time, imperfect memory and calculating capacities, and other limited resources, and also by the available opportunities in the economy and elsewhere. These opportunities are largely determined by the private and collective actions of other individuals and organizations.

Different constraints are decisive for different situations, but the most fundamental constraint is limited time. Economic and medical progress have greatly increased length of life, but not the physical flow of time itself, which always restricts everyone to twenty-four hours per day. So while goods and services have expanded enormously in rich countries, the total time available to consume has not. Thus, wants remain unsatisfied in rich countries as well as in poor ones. For while the growing abundance of goods may reduce the value of additional goods, time becomes more valuable as goods become more abundant. Utility maximization is of no relevance in a Utopia where everyone’s needs are fully satisfied, but the constant flow of time makes such a Utopia impossible."

Becker's Nobel lecture goes on to review his work in some key areas: discrimination, crime and punishment, formation of human capital, and structure of families. But here, I'd point out a different implication of Becker's view.

There is a long-standing prediction, traceable at least as far back as John Stuart Mill's Principles of Political Economy in 1848 (for example, here), which looks forward to the end of scarcity. The argument is that someday–perhaps not too far into the future–there will be "enough" economic growth. When that time arrives, people will be able to work less or not at all, while enjoying a sufficiency of the material goods that they want along with the time to pursue higher goals. When this time arrives, Mill wrote: "There would be as much scope as ever for all kinds of mental culture, and moral and social progress; as much room for improving the Art of Living, and much more likelihood of its being improved, when minds ceased to be engrossed by the art of getting on. Even the industrial arts might be as earnestly and as successfully cultivated, with this sole difference, that instead of serving no purpose but the increase of wealth, industrial improvements would produce their legitimate effect, that of abridging labour."

When Becker points out that the starting point for economic analysis is scarcity, that the most fundamental embodiment of scarcity is the limits of time, and that time becomes more valuable as per capita incomes rise, his argument implies that economic growth will never render the economic analysis of tradeoffs obsolete.

Big Data in Political Campaigns

How does the collection and use of big data work in political campaigns? David W. Nickerson and Todd Rogers pull back the curtain and offer a glimpse of what's been happening in "Political Campaigns and Big Data," which appears in the Spring 2014 issue of the Journal of Economic Perspectives. Nickerson is a Notre Dame professor of political science who was "'Director of Experiments' in the Analytics Department in the 2012 re-election campaign of President Obama." Rogers is a professor of public policy at Harvard's Kennedy School who "co-founded the Analyst Institute, which uses field experiments and behavioral science insights to develop best practices in progressive political communications." They write:

Over the past six years, campaigns have become increasingly reliant on analyzing large and detailed datasets to create the necessary predictions. While the adoption of these new analytic methods has not radically transformed how campaigns operate, the improved efficiency gives data-savvy campaigns a competitive advantage. This has led the political parties to engage in an arms race to leverage ever-growing volumes of data to create votes. This paper describes the utility and evolution of data in political campaigns. The techniques used as recently as a decade or two ago by political campaigns to predict the tendencies of citizens appear extremely rudimentary by current standards.

Like all articles in JEP back to the first issue in 1987, it is freely available courtesy of the American Economic Association. (Full disclosure: I've been managing editor of JEP back to that first issue in 1987.) Here are some points from their essay that jumped out at me.

The starting point for gathering data on potential voters is the publicly available files of official voters maintained in each state. As Nickerson and Rogers write: "The official voter file contains a wide range of information. In addition to personal information such as date of birth and gender, which are often valuable in developing predictive scores, voter files also contain contact information such as address and phone." In addition, while the files of course don't record who anyone voted for, they do show whether people voted, and how they voted–say, on Election Day, or using some form of early or absentee voting.

This data can then be merged with data from other sources. Census data is available for the average of a voting precinct, showing "the average household income, average level of education, average number of children per household, and ethnic distribution" for that precinct.
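The kind of merge described here amounts to a simple left join of individual voter records against precinct-level averages. Every field name and value below is invented for illustration, not taken from any actual voter file:

```python
# Hypothetical individual records from a state voter file
voter_file = [
    {"voter_id": 1, "precinct": "P-12", "birth_year": 1980, "voted_2012": True},
    {"voter_id": 2, "precinct": "P-07", "birth_year": 1953, "voted_2012": False},
]

# Hypothetical precinct-level census averages
census_by_precinct = {
    "P-12": {"avg_household_income": 54_000, "avg_years_education": 13.1},
    "P-07": {"avg_household_income": 71_000, "avg_years_education": 14.6},
}

# Left join: each voter record picks up the averages for its precinct;
# a voter in a precinct with no census match keeps just their own fields.
merged = [
    {**voter, **census_by_precinct.get(voter["precinct"], {})}
    for voter in voter_file
]
```

Note that the census fields are precinct averages, so after the merge every voter in the same precinct carries identical census values; the individual-level variation still comes from the voter file itself.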

Additional data can be purchased from commercial firms. Nickerson and Rogers report that the most cost-effective data to purchase is updated phone numbers (because the phone numbers in the state voter registration files are often outdated after a few years) as well as data about "estimated years of education, home ownership status, and mortgage information." Other information, while available, isn't cost-effective to buy. They write: "In contrast, information on magazine subscriptions, car purchases, and other consumer tastes are relatively expensive to purchase from vendors, and also tend to be available for very few individuals. Given this limited coverage, this data tends not to be useful in constructing predictive scores for the entire population—and so campaigns generally avoid or limit purchases of this kind of consumer data."

Finally, a major source of voter information is provided by voters themselves when they sign up at a candidate's website or party website. Not only do people provide information directly, but the campaign can also keep track of what sorts of topics or messages cause people to respond by clicking on a link or donating money, so much can be learned about people in that way.

These sources of information have some interesting implications. Campaigns know more about those who vote, and who are politically active, than about those who don't vote regularly or who are not politically active. Campaigns also tend to know more about their own supporters. Nickerson and Rogers write: "To the extent that predictive scores are useful and reveal true unobserved characteristics about citizens, it means that multiple organizations will produce predictive scores that recommend targeting the same sets of citizens. For example, some citizens might find themselves contacted many times, while other citizens—like those with low turnout behavior scores in 2012—might be ignored by nearly every campaign."

After collecting and collating and coordinating all this data, the question is how to use it. Nickerson and Rogers point out that focusing on those who are already very likely to vote for you, or focusing on those who are already very likely to vote against you, tends to be a waste of money. Thus, one way that data can make a campaign more cost-effective is that it can minimize spending money on those who are unpersuadable or who are already persuaded. This also reduces the risk of "backlash," in which attempts to encourage voting for your candidate rev up voters for the other side.

Another possible advantage is that campaigns can run small-scale experiments about what messages or actions are likely to cause a certain slice of voters to take an action–clicking on a link, volunteering time, putting up a sign, giving money–that is likely to be correlated with voting for the candidate later on. When small-scale experiments have shown what steps are likely to be effective, then the approach can be used at larger scale. How effective can such steps be? They write: "Suppose a campaign's persuasive communications has an average treatment effect of 2 percentage points—a number on the high end of persuasion effects observed in high-expense campaigns: that is, if half of citizens who vote already planned to vote for the candidate, 52 percent would support the candidate after the persuasive communication."
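The quoted treatment-effect arithmetic, written out. The baseline and effect size come from the passage; the contact volume is a hypothetical number added to show why even a small effect matters at scale:

```python
baseline_support = 0.50      # half of voters already plan to support the candidate
avg_treatment_effect = 0.02  # 2 percentage points, the quoted high-end figure

# Support among contacted voters after the persuasive communication
support_after_contact = baseline_support + avg_treatment_effect  # 0.52

# What a campaign cares about: expected extra votes per block of contacts
contacts = 100_000  # hypothetical outreach volume
expected_extra_votes = contacts * avg_treatment_effect
```

At this (optimistic) effect size, 100,000 persuasive contacts would be expected to shift about 2,000 votes, which is why targeting the contacts at genuinely persuadable voters matters so much for cost-effectiveness.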

Nickerson and Rogers point out in their conclusion that using big data to drive campaigning, in a very real way, makes traditional boots-on-the-ground campaigning more important than ever. After all, the bottom line of the campaign is still to push for more of your voters to turn out. Big data can help a campaign allocate resources more cost-effectively, but the campaign still needs to do the actual work.

"The improved capability to target individual voters offers campaigns an opportunity to concentrate their resources where they will be most effective. This power, however, has not radically transformed the nature of campaign work. One could argue that the growing impact of data analytics in campaigns has amplified the importance of traditional campaign work. . . . Professional phone interviews are still used for message development and tracking, but they are also essential for developing predictive scores of candidate support and measuring changes in voter preferences in randomized experiments. Similarly, better targeting has made grassroots campaign tactics more efficient and therefore more cost competitive with mass communication forms of outreach. Volunteers still need to persuade skeptical neighbors, but they are now better able to focus on persuadable neighbors and use messages more likely to resonate. This leads to higher-quality interactions and (potentially) a more pleasant volunteer experience. So while savvy campaigns will harness the power of predictive scores, the scores will only help the campaigns that were already effective."

Work Philosophy from Gabriel García Márquez

I'm often at least a few beats behind the tune on news that doesn't involve economics or policy, so I just heard a few days ago that Gabriel García Márquez, who won the Nobel Prize in Literature in 1982 for One Hundred Years of Solitude and other works, died on April 17. I could see the genius in his work, but it was never among my favorites: the magic in his "magic realism" felt to me a little too contrived and mannered. But I was reading in English translation, not in Spanish, and what do I know about literature, anyway?

I do have a quotation from García Márquez up on my office door that conveys a home truth about my own work life. It's from an interview with him that was published in the Boston Review (March-April 1983, pp. 26-27), and later reprinted in the 2006 collection Conversations with Gabriel García Márquez, edited by Gene H. Bell-Villada (p. 137). He was asked about how he felt about One Hundred Years of Solitude being used as required reading in college courses and cited by academics. Here's part of his answer:

"On another occasion a sociologist from Austin, Texas, came to see me because he'd grown dissatisfied with his methods, found them arid, insufficient. So he asked me what my own method was. I told him I didn't have a method. All I do is read a lot, think a lot, and rewrite constantly. It's not a scientific thing."

I'm the managing editor of an academic economics journal, and an occasional lecturer and writer. That ethic might serve as a useful motto for editors everywhere.

Farewell to Notes

When the first issue of the American Economic Review, which would become the preeminent research journal in academic economics, was published back in 1911, it devoted 13 pages to "Notes"–that is, news about the profession of economics. At a time when the number of academic economists was much smaller, and methods of broad-based communication were much slower, the "Notes" included mentions of conferences that had already happened, books that were soon to be published, contributions of historical papers to libraries, even the sabbatical plans for some prominent economists. When I took the job as managing editor of the Journal of Economic Perspectives in 1987, we inherited the "Notes" from the AER. But now, after a run of 103 years, the rise of the web means that the time has come to stop publishing conference announcements, calls for papers, awards, and the like in a quarterly journal–or indeed on paper at all.

In the just-released Spring 2014 issue of JEP, I commemorated the occasion with a "Farewell to Notes." Here are the opening and closing paragraphs:

The great composer Johannes Brahms once remarked: “It is not difficult to compose; but it is incredibly difficult to let the superfluous notes drop under the table” (as quoted in Musgrave and Pascall 1987, p. 138). Here at the Journal of Economic Perspectives, the challenges of composing each issue remain, but the “Notes” have become superfluous, at least in their paper version.

The “Notes,” as those who lurk in these back pages of JEP know well, announce forthcoming conferences, calls for papers, awards, and the like. However, the Internet has made it obsolete to deliver such information on paper in a quarterly journal. … But as we say farewell to the print version of the “Notes,” a moment of remembrance seems appropriate. The first issue of the American Economic Review, published in 1911, found it worthwhile to devote 13 out of 219 total pages to “Notes.” …

Admittedly, the ending of the “Notes” section as printed within the covers of the Journal of Economic Perspectives doesn’t rank with some of the other great endings, like the revelation of what Citizen Kane meant by “Rosebud”; or “Forget it, Jake, it’s Chinatown”; or “Oh, Auntie Em, there’s no place like home!” But in its own small way, the end of the paper version of the “Notes” after its run of 103 years is one more sign of the remarkable changes in information and communication technology that surround us—and thus worth remarking.

Spring 2014 Journal of Economic Perspectives

The Spring 2014 issue of the Journal of Economic Perspectives is now freely available on-line, courtesy of the publisher, the American Economic Association. Indeed, not only this issue but all previous issues back to 1987 are available. (Full disclosure: I've been the Managing Editor since the journal started, so this issue is #108 for me.) I'll probably blog about some of these articles in the next week or two. But for now, I'll first list the table of contents, and then below will provide abstracts of articles and weblinks.

Symposium: Big Data

"Big Data: New Tricks for Econometrics," by Hal R. Varian
"High-Dimensional Methods and Inference on Structural and Treatment Effects," by Alexandre Belloni, Victor Chernozhukov and Christian Hansen
"Political Campaigns and Big Data," by David W. Nickerson and Todd Rogers
"Privacy and Data-Based Research," by Ori Heffetz and Katrina Ligett

Symposium: Global Supply Chains

"Slicing Up Global Value Chains," by Marcel P. Timmer, Abdul Azeez Erumban, Bart Los, Robert Stehrer and Gaaitzen J. de Vries
"Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research," by Robert C. Johnson

Articles and Features

"Raj Chetty: 2013 Clark Medal Recipient," by Martin Feldstein
"Fluctuations in Uncertainty," by Nicholas Bloom
"The Market for Blood," by Robert Slonim, Carmen Wang and Ellen Garbarino
"Retrospectives: The Cyclical Behavior of Labor Productivity and the Emergence of the Labor Hoarding Concept," by Jeff E. Biddle
"Recommendations for Further Reading," by Timothy Taylor
"Correction and Update: The Economic Effects of Climate Change," by Richard S. J. Tol
"Farewell to Notes," by Timothy Taylor

________________________

And here are the abstracts and links:

Symposium: Big Data


"Big Data: New Tricks for Econometrics," by Hal R. Varian

Computers are now involved in many economic transactions and can capture data associated with these transactions, which can then be manipulated and analyzed. Conventional statistical and econometric techniques such as regression often work well, but there are issues unique to big datasets that may require different tools. First, the sheer size of the data involved may require more powerful data manipulation tools. Second, we may have more potential predictors than appropriate for estimation, so we need to do some kind of variable selection. Third, large datasets may allow for more flexible relationships than simple linear models. Machine learning techniques such as decision trees, support vector machines, neural nets, deep learning, and so on may allow for more effective ways to model complex relationships. In this essay, I will describe a few of these tools for manipulating and analyzing big data. I believe that these methods have a lot to offer and should be more widely known and used by economists.
Full-Text Access | Supplementary Materials

\”High-Dimensional Methods and Inference on Structural and Treatment Effects,\” by Alexandre Belloni, Victor Chernozhukov and Christian Hansen

Data with a large number of variables relative to the sample size—\”high-dimensional data\”—are readily available and increasingly common in empirical economics. High-dimensional data arise through a combination of two phenomena. First, the data may be inherently high dimensional in that many different characteristics per observation are available. For example, the US Census collects information on hundreds of individual characteristics and scanner datasets record transaction-level data for households across a wide range of products. Second, even when the number of available variables is relatively small, researchers rarely know the exact functional form with which the small number of variables enter the model of interest. Researchers are thus faced with a large set of potential variables formed by different ways of interacting and transforming the underlying variables. This paper provides an overview of how innovations in \”data mining\” can be adapted and modified to provide high-quality inference about model parameters. Note that we use the term \”data mining\” in a modern sense which denotes a principled search for \”true\” predictive power that guards against false discovery and overfitting, does not erroneously equate in-sample fit to out-of-sample predictive ability, and accurately accounts for using the same data to examine many different hypotheses or models.
Full-Text Access | Supplementary Materials

"Political Campaigns and Big Data," by David W. Nickerson and Todd Rogers

Modern campaigns develop databases of detailed information about citizens to inform electoral strategy and to guide tactical efforts. Despite sensational reports about the value of individual consumer data, the most valuable information campaigns acquire comes from the behaviors and direct responses provided by citizens themselves. Campaign data analysts develop models using this information to produce individual-level predictions about citizens\’ likelihoods of performing certain political behaviors, of supporting candidates and issues, and of changing their support conditional on being targeted with specific campaign interventions. The use of these predictive scores has increased dramatically since 2004, and their use could yield sizable gains to campaigns that harness them. At the same time, their widespread use effectively creates a coordination game with incomplete information between allied organizations. As such, organizations would benefit from partitioning the electorate to not duplicate efforts, but legal and political constraints preclude that possibility.
Full-Text Access | Supplementary Materials

\”Privacy and Data-Based Research,\” by Ori Heffetz and Katrina Ligett

What can we, as users of microdata, formally guarantee to the individuals (or firms) in our dataset, regarding their privacy? We retell a few stories, well-known in data-privacy circles, of failed anonymization attempts in publicly released datasets. We then provide a mostly informal introduction to several ideas from the literature on differential privacy, an active literature in computer science that studies formal approaches to preserving the privacy of individuals in statistical databases. We apply some of its insights to situations routinely faced by applied economists, emphasizing big-data contexts.
Full-Text Access | Supplementary Materials

Symposium: Global Supply Chains

\”Slicing Up Global Value Chains,\” by Marcel P. Timmer, Abdul Azeez Erumban, Bart Los, Robert Stehrer and Gaaitzen J. de Vries

In this paper, we "slice up the global value chain" using a decomposition technique that has recently become feasible due to the development of the World Input-Output Database. We trace the value added by all labor and capital that is directly and indirectly needed for the production of final manufacturing goods. The production systems of these goods are highly prone to international fragmentation as many stages can be undertaken in any country with little variation in quality. We seek to establish a series of facts concerning the global fragmentation of production that can serve as a starting point for future analysis. We describe four major trends. First, international fragmentation, as measured by the foreign value-added content of production, has rapidly increased since the early 1990s. Second, in most global value chains there is a strong shift towards value being added by capital and high-skilled labor, and away from less-skilled labor. Third, within global value chains, advanced nations increasingly specialize in activities carried out by high-skilled workers. Fourth, emerging economies surprisingly specialize in capital-intensive activities.
Full-Text Access | Supplementary Materials

\”Five Facts about Value-Added Exports and Implications for Macroeconomics and Trade Research,\” by Robert C. Johnson

Due to the rise of global supply chains, gross exports do not accurately measure the amount of value added exchanged between countries. I highlight five facts about differences between gross and value-added exports. These differences are large and growing over time, currently around 25 percent, and manufacturing trade looks more important, relative to services, in gross than value-added terms. These differences are also heterogenous across countries and bilateral partners, and changing unevenly across countries and partners over time. Taking these differences into account enables researchers to obtain better quantitative answers to important macroeconomic and trade questions. I discuss how the facts inform analysis of the transmission of shocks across countries; the mechanics of trade balance adjustments; the impact of frictions on trade; the role of endowments and comparative advantage; and trade policy.
Full-Text Access | Supplementary Materials

Articles and Features

\”Raj Chetty: 2013 Clark Medal Recipient,\” by Martin Feldstein

Raj Chetty is eminently deserving of being awarded the John Bates Clark Medal at the age of 33. His research has transformed the field of public economics. His work is motivated by important public policy issues in the fields of taxation, social insurance, and public spending for education. He approaches his subjects with a creative redefinition of the problems that he studies, and his empirical methods often draw on experimental evidence or unprecedentedly large sets of integrated data. While his work is founded on basic microeconomics, he modifies this framework to take into account behavioral and institutional considerations. Chetty is a prolific scholar. It is difficult to summarize all of Chetty\’s research or even to capture the details of his most significant papers. I have therefore chosen a selection of Chetty\’s important papers dealing with taxation, social insurance, and education that contributed to his selection as the winner of the John Bates Clark Medal.
Full-Text Access | Supplementary Materials

\”Fluctuations in Uncertainty,\” by Nicholas Bloom

Uncertainty is an amorphous concept. It reflects uncertainty in the minds of consumers, managers, and policymakers about possible futures. It is also a broad concept, including uncertainty over the path of macro phenomena like GDP growth, micro phenomena like the growth rate of firms, and noneconomic events like war and climate change. In this essay, I address four questions about uncertainty. First, what are some facts and patterns about economic uncertainty? Both macro and micro uncertainty appear to rise sharply in recessions and fall in booms. Uncertainty also varies heavily across countries—developing countries appear to have about one-third more macro uncertainty than developed countries. Second, why does uncertainty vary during business cycles? Third, do fluctuations in uncertainty affect behavior? Fourth, has higher uncertainty worsened the Great Recession and slowed the recovery? Much of this discussion is based on research on uncertainty from the last five years, reflecting the recent growth of the literature.
Full-Text Access | Supplementary Materials

\”The Market for Blood,\” by Robert Slonim, Carmen Wang and Ellen Garbarino

Donating blood, "the gift of life," is among the noblest activities and it is performed worldwide nearly 100 million times annually. The economic perspective presented here shows how the gift of life, albeit noble and often motivated by altruism, is heavily influenced by standard economic forces including supply and demand, economies of scale, and moral hazard. These forces, shaped by technological advances, have driven the evolution of blood donation markets from thin one-to-one "marriage markets" in which each recipient needed a personal blood donor, to thick, impersonalized, diffuse markets. Today, imbalances between aggregate supply and demand are a major challenge in blood markets, including excess supply after disasters and insufficient supply at other times. These imbalances are not unexpected given that the blood market operates without market prices and with limited storage length (about six weeks) for whole blood. Yet shifting to a system of paying blood donors seems a practical impossibility given attitudes toward paying blood donors and concerns that a paid system could compromise blood safety. Nonetheless, we believe that an economic perspective offers promising directions to increase supply and improve the supply and demand balance even in the presence of volunteer supply and with the absence of market prices.
Full-Text Access | Supplementary Materials

\”Retrospectives: The Cyclical Behavior of Labor Productivity and the Emergence of the Labor Hoarding Concept,\” by Jeff E. Biddle

The concept of "labor hoarding," at least in its modern form, was first fully articulated in the early 1960s by Arthur Okun (1963). By the end of the 20th century, the concept of "labor hoarding" had become an accepted part of economists' explanations of the workings of labor markets and of the relationship between labor productivity and economic fluctuations. The emergence of this concept involved the conjunction of three key elements: the fact that measured labor productivity was found to be procyclical, rising during expansions and falling during contractions; a perceived contradiction with the theory of the neoclassical firm in a competitive economy; and a possible explanation based on optimizing behavior on the part of firms. Each of these three elements—fact, contradiction, and explanation—has a history of its own, dating back to at least the opening decades of the twentieth century. Telling the story of the emergence of the modern labor hoarding concept requires recounting these three histories, histories that involve the work of economists motivated by diverse purposes and often not mainly, if at all, concerned with the questions that the labor hoarding concept was ultimately used to address. As a final twist to the story, the long-standing positive relationship between labor productivity and output in the US economy began to disappear in the late 1980s; and during the Great Recession, labor productivity rose while the economy contracted.
Full-Text Access | Supplementary Materials

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

\”Correction and Update: The Economic Effects of Climate Change,\” by Richard S. J. Tol

Gremlins intervened in the preparation of my paper "The Economic Effects of Climate Change" published in the Spring 2009 issue of this journal. In Table 1 of that paper, titled "Estimates of the Welfare Impact of Climate Change," minus signs were dropped from the two impact estimates, one by Plambeck and Hope (1996) and one by Hope (2006). In Figure 1 of that paper, titled "Fourteen Estimates of the Global Economic Impact of Climate Change," and in the various analyses that support that figure, the minus sign was dropped from only one of the two estimates. The corresponding Table 1 and Figure 1 presented here correct these errors. Figure 2, titled "Twenty-One Estimates of the Global Economic Impact of Climate Change," adds two overlooked estimates from before the time of the original 2009 paper and five more recent ones.
Full-Text Access | Supplementary Materials

\”Farewell to Notes,\” by Timothy Taylor
Full-Text Access | Supplementary Materials

Highway Patrol Traffic Enforcement

When trying to estimate the extent to which law enforcement reduces crime, there's a standard problem of sorting out cause and effect. If a neighborhood with a lot of crime gets more police, then a higher number of police will be correlated with higher crime rates, but the crime caused the higher police presence, not the other way around. There are many more traffic police out on New Year's Eve, and also many more drunk drivers, but that doesn't mean the added police caused additional drunkenness. Ideally, researchers would find an experiment in which police presence was changed randomly, in a way that didn't reflect crime levels, in some places but not others, so that the effect of police could be studied. But random variation in the assignment of police officers, without regard to crime levels, is not a popular policy for politicians to support.

But sometimes an event occurs that offers what researchers call a "natural experiment": a situation in which a variation in police presence occurred for reasons that had nothing to do with crime levels. Gregory DeAngelo and Benjamin Hansen take advantage of such an opportunity to look at how highway traffic patrols, looking for speeders and reckless drivers, affect fatalities in their paper, "Life and Death in the Fast Lane: Police Enforcement and Traffic Fatalities." It appears in the most recent issue of the American Economic Journal: Economic Policy (6:2, pp. 231–257). The AEJ: Policy is not freely available online, but many readers will have access through a library subscription.

Their bottom line: the highway patrol saves lives at a cost of about $309,000 per life saved. A standard metric among economists, the "value of a statistical life," suggests that in the United States it is worth taking regulatory or law-enforcement actions that reduce the risks of death whenever the cost of such actions is less than about $9 million per life.
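As a quick check on that claim, using only the two figures quoted here, the cost-benefit test is a one-line comparison:

```python
# Back-of-the-envelope check of the cost-effectiveness claim, using only
# the two figures quoted in the paper: cost per life saved by highway
# patrols (~$309,000) and a standard value of a statistical life (~$9 million).
cost_per_life_saved = 309_000          # dollars, DeAngelo and Hansen's estimate
value_of_statistical_life = 9_000_000  # dollars, standard US benchmark

# An intervention passes the usual cost-benefit test when the cost of
# averting one death is below the value of a statistical life.
assert cost_per_life_saved < value_of_statistical_life

# By this metric, each dollar spent on patrols returns roughly $29 in
# statistical-life value.
ratio = value_of_statistical_life / cost_per_life_saved
print(f"benefit-cost ratio: about {ratio:.0f} to 1")
```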

Their story starts in 1997, when the state of Oregon passed a ballot proposition that placed sharp limits on property taxes. The state's public finances suffered, and after a ballot proposition to raise additional tax revenue failed in 2003, 117 out of 354 full-time roadway troopers were laid off. The number of traffic citations given for speeding or reckless driving on Oregon highways fell by 25 percent. At about the same time, the speed of Oregon drivers crept up. One measure comes from automated speed counters. Another comes from comparing the average speed at which drivers were traveling when they received a speeding ticket to the posted speed limit.

The number of motor vehicle deaths in Oregon rose. Here's a graph showing the number of incapacitating injuries or deaths in Oregon, using monthly data for the three years before and after the reduction in highway troopers. The data vary through the year, with more fatalities and accidents in the summer driving season. But you can see that the peaks get higher after the number of police was reduced.

There are various ways to estimate the effects of reducing the highway patrol in Oregon. One can look before and after the change. One can compare with trends in neighboring states, like Washington and Idaho. The authors do both, and they also generate a "synthetic" control group of states that showed patterns similar to Oregon's before the change; in this case, the synthetic control group turns out to be Idaho, Washington, Nevada, and West Virginia, with differing weights on these states. The exact estimates vary, but looking over a longer period of time, they argue:

"An analysis of the reduction in state police in Oregon since 1979 suggests that there would have been 2,167 fewer deaths over the 1979–2005 time span if the state police had maintained their original 1979 staffing levels. Moreover, if the police force were allowed to grow at the same rate as the increases in VMT [vehicle-miles travelled] (which would amount to a 360 percent increase over actual staffing levels in 2005), then there would have been 5,031 fewer traffic fatalities over 1979–2005."
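The "synthetic control" step can be sketched in a few lines: choose nonnegative weights, summing to one, for the donor states so that the weighted average of their outcomes tracks Oregon's before the layoffs. The data below are invented for illustration; DeAngelo and Hansen's actual estimation uses the real pre-period fatality series.

```python
import numpy as np
from scipy.optimize import minimize

# A minimal sketch of the synthetic-control idea, with made-up pre-period
# data; the donor states are those named in the paper, but the numbers and
# "true" weights here are illustrative, not DeAngelo and Hansen's.
rng = np.random.default_rng(0)
donors = ["Idaho", "Washington", "Nevada", "West Virginia"]
X0 = rng.normal(100, 10, size=(36, len(donors)))  # 36 pre-period months x donors
true_w = np.array([0.4, 0.3, 0.2, 0.1])
X1 = X0 @ true_w + rng.normal(0, 1, size=36)      # "Oregon" built from donors

# Choose nonnegative weights summing to 1 that make the weighted donor
# average track Oregon as closely as possible in the pre-treatment period.
def loss(w):
    return np.sum((X1 - X0 @ w) ** 2)

n = len(donors)
res = minimize(loss, np.full(n, 1 / n), method="SLSQP",
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
for state, w in zip(donors, res.x):
    print(f"{state}: weight {w:.2f}")
```

The post-layoff gap between Oregon's actual outcomes and this weighted donor average is then read as the effect of the trooper reduction.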

Of course, one can debate how the Oregon numbers would extrapolate to other states, or whether the benefits of adding more highway patrol officers would be symmetric with the costs of reducing their number. But overall, the US has about 30,000 deaths from motor vehicle accidents each year, plus 1.5 million injuries. A few years ago, the Centers for Disease Control estimated that motor vehicle fatalities cost about $41 billion in medical care and lost work. This estimate understates the total social costs, because it doesn't place any value on the lives lost, nor on the suffering and other costs of motor vehicle injuries (as opposed to deaths). It seems very plausible that increasing the number of highway patrol officers would be a cost-effective way of saving lives.

100 Years of Consumer Price Index Data

The Consumer Price Index was first officially published in 1921, but at that time price data were also published going back to 1913. Thus, there's a sense in which a very small group of geeks and wonks are now celebrating the first 100 years of price data. Those who wish to join our very small party might start with the Monthly Labor Review, published by the U.S. Bureau of Labor Statistics, which has two articles in the April 2014 issue looking back in time: "One hundred years of price change: the Consumer Price Index and the American inflation experience," and "The first hundred years of the Consumer Price Index: a methodological and political history."

Here, I'll just mention some points that caught my eye from these two articles. But as a starting point, here's a figure showing the path of annual inflation rates as measured by the Consumer Price Index over the last century, taken from the third edition of my Principles of Economics textbook, published earlier this year. You can see the three big inflations: those intertwined with World War I and World War II, and the high peacetime inflation of the 1970s. You can also see the episodes of deflation after World Wars I and II, during the Great Depression, and the recent blip of deflation in the aftermath of the Great Recession.

1) Reading the history is an ongoing reminder that price changes have been politically sensitive for a long time, and indeed, that price controls have been fairly common in U.S. experience. As BLS writes: "[A]ctivist policies aimed at directly controlling prices were a regular feature of the nation’s economy until the last few decades." When President Richard Nixon instituted wage and price controls in 1971, there had already been price controls "during World War I, the 1930s, World War II, and the Korean war." There had also been episodes, like the New Deal legislation after the Great Depression, where the government had attempted to institute price floors to block deflation. Add in the price controls that affected certain sectors for much of the 20th century—airfares, phone service, interest rates paid by banks, public utilities, and others—and the notion that the U.S. economy used to be a wide-open free market is open to some qualification.

2) There have been various efforts over time to hold down inflation by what became known, under President Gerald Ford, as "jaw-boning": that is, telling firms publicly that they shouldn't raise prices, and making a fuss when they did. For example, "fair-price committees" were established after World War I to monitor whether sellers exceeded price guidelines, and there were also attempts to restrain inflation voluntarily after the Korean war and under President Carter in the late 1970s. But perhaps the best-remembered episode today is President Ford's Whip Inflation Now (WIN) campaign in 1974:

President Ford inherited the difficult inflation situation. In late 1974, he declared inflation to be “public enemy number one.” He solicited inflation-fighting ideas from the public, and his signature “Whip Inflation Now” (WIN) campaign was started. Citizens could receive their WIN button by signing this pledge:

I enlist as an Inflation Fighter and Energy Saver for the duration. I will do the very best I can for America.

An October 1974 newspaper reprints the form containing the pledge. Tellingly, the story next to the form asserts that relief from food prices was unlikely before 1976, while another account details the administration’s efforts to advance price-fixing legislation. Buttons were hardly the only WIN product: there were WIN duffel bags, WIN earrings, and even a WIN football.

3) Whatever number for price increases the Bureau of Labor Statistics produced was often used in labor disputes, so such estimates were always controversial. As BLS writes about labor disputes back in the 1940s, "[M]any were confusing the additional expense of attaining a higher standard of living for an increase in the cost of a fixed standard of living." I'd say that confusion persists today.

4) There has long been a controversy over whether the CPI just looked at prices, or whether it was seeking to capture the broader idea of a "cost of living." In 1940, the Cost of Living Division in the Bureau of Labor Statistics called its index "Cost of Living of Wage Earners and Lower-Salaried Workers in Large Cities." But there were numerous complaints that this measure covered only price changes, and didn't capture, for example, the fact that during World War II many households had higher expenses from the need to move between homes, pay higher income taxes, and other factors. In 1945, the BLS banished the "cost of living" term, and instead renamed the index the "Consumer's Price Index for Moderate Income Families in Large Cities."

5) However, soon after adopting the "price index" terminology, economists began to argue, in congressional testimony in the 1950s and in the "Stigler Commission" report in 1961, that the conventional approach of pricing a fixed basket of goods purchased by consumers and tracking how the total cost of that basket changed over time was, perhaps counterintuitively, not accurately measuring how consumers were affected by price changes. The BLS quotes the 1961 Stigler report:

"It is often stated that the Consumer Price Index measures the price changes of a fixed standard of living based on a fixed market basket of goods and services. In a society where there are no new products, no changes in the quality of existing products, no changes in consumer tastes, and no changes in relative prices of goods and services, it is indeed true that the price of a fixed market basket of goods and services will reflect the cost of maintaining (for an individual household or an average family) a constant level of utility. But in the presence of the introduction of new products, and changes in product quality, consumer tastes, and relative prices, it is no longer true that the rigidly fixed market basket approach yields a realistic measure of how consumers are affected by prices. If consumers rearrange their budgets to avoid the purchase of those products whose prices have risen and simultaneously obtain access to equally desirable new, low-priced products, it is quite possible that the cost of maintaining a fixed standard of living has fallen despite the fact that the price of a fixed market basket has risen."

As the BLS notes: "In response to the review’s findings, BLS would abandon the constant-goods conception of a price index and adopt the constant-utility framework as the guiding theoretical perspective in revising future indexes." In other words, the BLS in effect returned to a broader cost-of-living concept, but one that looked only at how the cost of living was affected by changes in prices and in the types of goods available.
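To see the substitution point concretely, here is a two-good toy example (prices, quantities, and preferences all invented): a fixed-basket index prices the old basket at new prices, while a consumer with Cobb-Douglas tastes who substitutes away from the pricier good faces a smaller increase in the cost of maintaining the same level of utility.

```python
# A two-good illustration of the Stigler commission's point, with made-up
# prices: when relative prices change and the consumer can substitute, the
# fixed-basket (Laspeyres) index overstates the rise in the cost of living.
p0 = {"apples": 1.00, "oranges": 1.00}   # base-period prices
p1 = {"apples": 2.00, "oranges": 1.00}   # apples double, oranges flat
q0 = {"apples": 50, "oranges": 50}       # base-period basket

# Fixed-basket index: cost of the old basket at new prices vs. old prices.
laspeyres = sum(p1[g] * q0[g] for g in q0) / sum(p0[g] * q0[g] for g in q0)

# For a Cobb-Douglas consumer with equal expenditure shares, the true
# cost-of-living index is the share-weighted geometric mean of price relatives.
share = {"apples": 0.5, "oranges": 0.5}
true_coli = 1.0
for g in share:
    true_coli *= (p1[g] / p0[g]) ** share[g]

print(f"fixed-basket index: {laspeyres:.3f}")    # 1.500
print(f"cost-of-living index: {true_coli:.3f}")  # about 1.414
```

The fixed basket says the cost of living rose 50 percent; the substituting consumer's cost of maintaining utility rose only about 41 percent.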

For those who would like a primer on the issues surrounding the measurement of inflation in an economy where consumers substitute as prices change, and where new products appear and existing products evolve, the Journal of Economic Perspectives has run a couple of symposia on the subject over the last couple of decades. The Winter 1998 issue has an article from the members of the Boskin Commission, which had published a report on ways to reform the Consumer Price Index, along with six comments. In the Winter 2003 issue, the JEP included another symposium on the CPI, this time leading off with a paper by one author of a National Academy of Sciences report on the topic, with a couple of comments. (Full disclosure: I've worked as Managing Editor of JEP since 1986.)

6) I had not known that what was then called the Bureau of Labor did a fairly detailed study of family expenditures and retail prices from 1888 to 1890.

\”At the time, the federal government was accumulating large budget surpluses, and both Democrats and Republicans identified tariff policy as the preferred tool for reducing the surpluses. … Congressional Republicans eventually won the debate and passed the Tariff Act of 1890, commonly called the McKinley Tariff, which increased the average tariff level from 38 percent to 49.5 percent. Concerned with the effect that this new tariff law would have on the cost of production in key industrial sectors, Congress requested the Bureau of Labor to conduct studies on wages, prices, and hours of work in the iron and steel, coal, textile, and glass industries. …From 1888 to 1890, expenditure data were collected from 8,544 families associated with the aforesaid industries. Retail prices were collected on “215 commodities, including 67 food items, in 70 localities,” from May 1889 to September 1891.\”

7) I had also not known that there was a detailed study on expenditures and prices around 1903.

\”After successfully fulfilling special requests for smaller, industry-specific statistical studies for the Congress and the President, the Bureau endeavored to conduct a comprehensive study of the condition of working families throughout the country. A survey of family expenditures from 1901 to 1903 was the first step in constructing a comprehensive index of retail prices. Bureau agents surveyed 25,440 families that were headed by a wage earner or salaried worker earning no more than $1,200 annually in major industrial centers in 33 states; with inclusivity in mind, the Bureau included African American and foreign-born families in its survey. Agents collected 1 year’s data on food, rent, insurance, taxes, books and newspapers, and other personal expenditures. Using data on the income and expenditures of 2,500 families, the Bureau derived expenditure weights, particularly for principal food items, from this study. The second step in the construction of the retail price index was the collection of retail prices on the goods reported in the expenditure survey. The Bureau collected these data from 800 retail merchants in localities which were representative of the data collected in that survey. The prices collected spanned the years from 1890 to 1903. The 3-year study culminated in the publication of a price index called the Relative Retail Price of Food, Weighted According to the Average Family Consumption, 1890 to 1902 (base of 1890–1899), the first weighted retail price index calculated and published by the Bureau of Labor. The index included monthly quotations for the average prices and relative prices (averages weighted by consumption) of 30 principal food items. The Bureau expanded this survey to include 1,000 retail outlets in 40 states, and the index ran through 1907 before ceasing publication.\”
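As a rough sketch of the method described in the quotation, an index of "relative prices weighted according to average family consumption" is a weighted average of each item's price relative, with weights taken from expenditure shares in the family survey. The items, prices, and weights below are invented for illustration:

```python
# A sketch of the Bureau's early method as described above: average price
# relatives for each food item, weighted by expenditure shares from the
# family survey. The items, prices, and weights here are invented.
base_prices = {"flour": 0.030, "beef": 0.120, "butter": 0.250}  # $ per lb, base period
prices_1902 = {"flour": 0.033, "beef": 0.132, "butter": 0.240}
weights = {"flour": 0.5, "beef": 0.3, "butter": 0.2}            # expenditure shares

# Weighted average of price relatives, expressed with the base period = 100.
index = 100 * sum(weights[g] * prices_1902[g] / base_prices[g] for g in weights)
print(f"weighted relative price index: {index:.1f}")  # 107.2
```

Here flour and beef are each 10 percent dearer and butter 4 percent cheaper, so the consumption-weighted index stands at 107.2 against a base of 100.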

8) No single price index is ever going to be perfect for all uses. So before using a price index, think about how it was constructed and whether it's the right tool for your purposes. I was delighted to see that one of the BLS articles ended by quoting a 1998 article from the Journal of Economic Perspectives on this point, by Katharine G. Abraham, John S. Greenlees, and Brent R. Moulton. They wrote:

\”It is, in fact, commonplace to observe that there is no single best measure of inflation. It is evident that the expanding number of users of the CPI have objectives and priorities that sometimes can come into conflict. The BLS response to this situation has been to develop a “family of indexes” approach, including experimental measures designed to provide information that furthers assessment of CPI measurement problems, or to focus on certain population subgroups, or to answer different questions from those answered by the CPI. All of these measures are carefully developed but have their own limitations. Those who use the data we produce should recognize these limitations and exercise judgment accordingly concerning whether and how the data ought to be used.\”