________________________
Symposium: Recent Ideas in Econometrics
"The State of Applied Econometrics: Causality and Policy Evaluation," by Susan Athey and Guido W. Imbens
Full-Text Access | Supplementary Materials
"The Use of Structural Models in Econometrics," by Hamish Low and Costas Meghir
Full-Text Access | Supplementary Materials
"Twenty Years of Time Series Econometrics in Ten Pictures," by James H. Stock and Mark W. Watson
Full-Text Access | Supplementary Materials
"Machine Learning: An Applied Econometric Approach," by Sendhil Mullainathan and Jann Spiess
Machines are increasingly doing "intelligent" things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
Full-Text Access | Supplementary Materials
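To make the prediction framing in the Mullainathan and Spiess abstract concrete, here is a minimal sketch in Python using scikit-learn and simulated data (nothing here is from the article): the algorithm is judged on the out-of-sample accuracy of the fitted prediction function, not on the interpretability of any estimated parameter.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Simulated stand-in for "pixels x" and "face / no face" labels y.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The algorithm is tuned for prediction; its fitted internals are not structural or causal estimates.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))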
"Identification and Asymptotic Approximations: Three Examples of Progress in Econometric Theory," by James L. Powell
In empirical economics, the size and quality of datasets and computational power have grown substantially, along with the size and complexity of the econometric models and the population parameters of interest. With more and better data, it is natural to expect to be able to answer more subtle questions about population relationships, and to pay more attention to the consequences of misspecification of the model for the empirical conclusions. Much of the recent work in econometrics has emphasized two themes: the first is the fragility of statistical identification; the other, related theme involves the way economists make large-sample approximations to the distributions of estimators and test statistics. I will discuss how these issues of identification and alternative asymptotic approximations have been studied in three research areas: analysis of linear endogenous regressor models with many and/or weak instruments; nonparametric models with endogenous regressors; and estimation of partially identified parameters. These areas offer good examples of the progress that has been made in econometrics.
Full-Text Access | Supplementary Materials
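As a stylized reminder of the first of Powell's three settings (generic notation, not taken from the article), the weak-instruments problem is usually cast in the linear model
\[
y_i = x_i \beta + \varepsilon_i, \qquad x_i = z_i'\pi + v_i,
\]
where the instruments $z_i$ are excluded from the outcome equation. Identification of $\beta$ requires $\pi \neq 0$, and "weak" instruments are commonly modeled as local to zero, $\pi = c/\sqrt{n}$, in which case the usual normal approximation to the two-stage least squares estimator can be unreliable even in large samples; this is the kind of alternative asymptotic approximation the abstract refers to.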
"Undergraduate Econometrics Instruction: Through Our Classes, Darkly," by Joshua D. Angrist and Jörn-Steffen Pischke
The past half-century has seen economic research become increasingly empirical, while the nature of empirical economic research has also changed. In the 1960s and 1970s, an empirical economist's typical mission was to "explain" economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for "true models" and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift.
Full-Text Access | Supplementary Materials
Symposium: Are Measures of Economic Growth Biased?
"Underestimating the Real Growth of GDP, Personal Income, and Productivity," by Martin Feldstein
Economists have long recognized that changes in the quality of existing goods and services, along with the introduction of new goods and services, can raise grave difficulties in measuring changes in the real output of the economy. But despite the attention to this subject in the professional literature, there remains insufficient understanding of just how imperfect the existing official estimates actually are. After studying the methods used by the US government statistical agencies as well as the extensive previous academic literature on this subject, I have concluded that, despite the various improvements to statistical methods that have been made through the years, the official data understate the changes of real output and productivity. The official measures provide at best a lower bound on the true real growth rate with no indication of the size of the underestimation. In this essay, I briefly review why national income should not be considered a measure of well-being; describe what government statisticians actually do in their attempt to measure improvements in the quality of goods and services; consider the problem of new products and the various attempts by economists to take new products into account in measuring overall price and output changes; and discuss how the mismeasurement of real output and of prices might be taken into account in considering various questions of economic policy.
Full-Text Access | Supplementary Materials
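The arithmetic behind Feldstein's lower-bound claim is a simple identity (stylized notation, not the article's): measured real growth is nominal growth less measured inflation,
\[
g^{\text{real}}_{\text{measured}} = g^{\text{nominal}} - \pi_{\text{measured}},
\]
so if unmeasured quality improvements and new goods lead measured inflation to overstate true inflation by some bias $b > 0$, then true real growth exceeds measured real growth by the same $b$; because $b$ is positive but of unknown size, the official figure serves only as a lower bound.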
"Challenges to Mismeasurement Explanations for the US Productivity Slowdown," by Chad Syverson
The United States has been experiencing a slowdown in measured labor productivity growth since 2004. A number of commentators and researchers have suggested that this slowdown is at least in part illusory because real output data have failed to capture the new and better products of the past decade. I conduct four disparate analyses, each of which offers empirical challenges to this "mismeasurement hypothesis." First, the productivity slowdown has occurred in dozens of countries, and its size is unrelated to measures of the countries' consumption or production intensities of information and communication technologies (ICTs, the type of goods most often cited as sources of mismeasurement). Second, estimates from the existing research literature of the surplus created by internet-linked digital technologies fall far short of the $3 trillion or more of "missing output" resulting from the productivity growth slowdown. Third, if measurement problems were to account for even a modest share of this missing output, the properly measured output and productivity growth rates of industries that produce and service ICTs would have to have been multiples of their measured growth in the data. Fourth, while measured gross domestic income has been on average higher than measured gross domestic product since 2004—perhaps indicating workers are being paid to make products that are given away for free or at highly discounted prices—this trend actually began before the productivity slowdown and moreover reflects unusually high capital income rather than labor income (i.e., profits are unusually high). In combination, these complementary facets of evidence suggest that the reasonable prima facie case for the mismeasurement hypothesis faces real hurdles when confronted with the data.
Full-Text Access | Supplementary Materials
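The "$3 trillion or more" figure is a cumulative gap relative to the pre-2004 productivity trend. A back-of-envelope sketch of the mechanics, with illustrative numbers chosen here rather than taken from the paper:

# Illustrative parameters only; not the paper's data.
gdp_end = 17.0    # assumed real GDP at the end of the slowdown window, in trillions of dollars
gap = 0.012       # assumed annual shortfall in productivity growth (1.2 percentage points)
years = 11        # assumed length of the slowdown window, in years

# Output "missing" relative to a counterfactual in which the pre-slowdown trend had continued.
missing = gdp_end * ((1 + gap) ** years - 1)
print(f"missing output: roughly ${missing:.1f} trillion")

Even with modest assumptions, the cumulated gap runs into the trillions of dollars, which is the scale that surplus from digital technologies would have to reach for the mismeasurement hypothesis to explain the slowdown.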
"How Government Statistics Adjust for Potential Biases from Quality Change and New Goods in an Age of Digital Technologies: A View from the Trenches," by Erica L. Groshen, Brian C. Moyer, Ana M. Aizcorbe, Ralph Bradley and David M. Friedman
A key economic indicator is real output. To get this right, we need to measure accurately both the value of nominal GDP (done by the Bureau of Economic Analysis) and key price indexes (done mostly by the Bureau of Labor Statistics). All of us have worked on these measurements while at the BLS and the BEA. In this article, we explore some of the thorny statistical and conceptual issues related to measuring a dynamic economy. An often-stated concern is that the national economic accounts miss some of the value of some goods and services arising from the growing digital economy. We agree that measurement problems related to quality changes and new goods have likely caused growth of real output and productivity to be understated. Nevertheless, these measurement issues are far from new, and, based on the magnitude and timing of recent changes, we conclude that it is unlikely that they can account for the pattern of slower growth in recent years. First, we discuss how the Bureau of Labor Statistics currently adjusts price indexes to reduce the bias from quality changes and the introduction of new goods, along with some alternative methods that have been proposed. We then present estimates of the extent of remaining bias in real GDP growth that stem from potential biases in growth of consumption and investment. And we take a look at potential biases that could result from challenges in measuring nominal GDP, including those involving the digital economy. Finally, we review ongoing work at BLS and BEA to reduce potential biases and further improve measurement.
Full-Text Access | Supplementary Materials
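One standard tool in this area is hedonic quality adjustment, which BLS applies to some product categories. In stylized form (generic notation, not the article's), item prices are regressed on observable characteristics,
\[
\ln p_{it} = \alpha_t + \sum_k \beta_k x_{ikt} + \varepsilon_{it},
\]
and when a new model replaces an old one, the part of the price difference predicted by the change in characteristics $x$ is attributed to quality change rather than inflation, so only the residual price movement enters the index.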
Articles
"Social Media and Fake News in the 2016 Election," by Hunt Allcott and Matthew Gentzkow
Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their "most important" source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Full-Text Access | Supplementary Materials
Yuliy Sannikov is an extraordinary theorist who has developed methods that offer new insights in analyzing problems that had seemed well-studied and familiar: for example, decisions that might bring about cooperation and/or defection in a repeated-play prisoner's dilemma game, or that affect the balance of incentives and opportunism in a principal-agent relationship. His work has broken new ground in methodology, often through the application of stochastic calculus methods. The stochastic element means that his work naturally captures situations in which there is a random chance that monitoring, communication, or signaling between players is imperfect. Using calculus in the context of continuous-time games allows him to overcome tractability problems that had long hindered research in a number of areas. He has substantially altered the toolbox available for studying dynamic games. This essay offers an overview of Sannikov's research in several areas.
Full-Text Access | Supplementary Materials
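To give a flavor of the continuous-time machinery mentioned above (a stylized textbook formulation, not drawn from the essay), suppose observed output in a principal-agent relationship follows $dX_t = A_t\,dt + \sigma\,dZ_t$, where $A_t$ is the agent's hidden effort and $Z_t$ a Brownian motion capturing imperfect monitoring. The agent's continuation value $W_t$ can then be represented as
\[
dW_t = r\bigl(W_t - u(C_t) + h(A_t)\bigr)\,dt + r\,Y_t\,\bigl(dX_t - A_t\,dt\bigr),
\]
with consumption $C_t$, effort cost $h(\cdot)$, and a sensitivity process $Y_t$ that governs incentives; treating $W_t$ as a state variable turns the contracting problem into a tractable differential-equation problem, which is the sort of tractability gain the overview describes.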
"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials