Carl F. Christ, 1923-2017

Carl F. Christ (1923-2017) was first trained as a physicist at Colorado College and the University of Chicago. He later wrote: "Upon graduation in 1943 I went to work in Chicago on the Manhattan Project, the atom bomb project. Information circulated freely in internal seminars for the professional staff, and we soon found out what we were working on. At the time I was living in Concord Co-op House, founded and mainly occupied by pacifist Quakers. After much thought I decided that I was not a pacifist, and that I would prefer the U.S. rather than Germany to be the first to develop the bomb. (It was known that they were working on it too.)" But after the war, he decided "to look for a social science in which I could use my mathematics." He went to the University of Chicago for a PhD in economics, and spent most of his professional career at Johns Hopkins University. A substantial number of students learned their introductory econometrics from Christ's textbook, Econometric Models and Methods, first published in 1966. Here's an obituary from Johns Hopkins, and here's one from the Baltimore Sun.

Back in Spring 1990, Christ wrote a nice biographical essay for The American Economist called "A Philosophy of Life" (pp. 33-39, available through JSTOR, and the source of the quotation above). He gives a sense of academia as it used to be:

"On the Lake Michigan beach in Michigan, where I spent summers as a child, and still do now, was an elderly gentleman in swim trunks, known to the children as Mr. Knight. I was quite surprised on entering graduate school at Chicago to find him, fully clothed, teaching economics. He had great skepticism of anyone who appeared to know all the answers. He told his classes, "As soon as a person gets a theory, he's lost. …

"I owe a great debt as well to Leonid Hurwicz, who was not even on my committee. I had presented my paper at the famous Cowles seminar (where only "clarifying questions" were allowed until the discussion period arrived, and where "clarifying questions" from Arrow, Hurwicz, and Modigliani began to fly as soon as the seminar began). I thought I was finished with my thesis, until Hurwicz invited me to come down to the University of Illinois (where he had just moved) because he had some suggestions for me. I stayed in his World War II prefab hut with him and his family for two days while he went over my thesis with a fine-tooth comb. When I left I was very depressed. But soon I realized that he had done me an enormous favor, and that my thesis was much improved as a result."

And here's a comment about what econometrics can teach and what economists can know:

"I used to believe that it was possible to build and estimate an econometric model that would represent an invariant law of economic behavior, valid for many places and for long periods of time. I no longer believe this, because I have yet to see an econometric model that continues to describe new data with no change in its parameters. Instead, I believe that economic reality is so complex that the best we can expect of an econometric model is that it may approximately represent the relations among its variables for a limited place and time. Such an approximation may be very useful, and may permit us to make forecasts for short periods into the future. But until we have much more knowledge about human biology and its relation to economic, social, and political behavior I think we will not achieve econometric models that are invariant over wide reaches of space and time.

"What then do economists really know? I think we know a great deal. (Remember that this knowledge must be only tentatively accepted.) Most of our knowledge is about equilibrium situations and how they change, rather than about the path followed by the economy on the way to equilibrium. We know some rather simple things that can be stated in nontechnical terms. For example, we know that as incomes rise, a smaller fraction of income is spent for agricultural and extractive products, and a larger fraction for processed goods and for services. We know that increases in a country's output per person require either more effort per person, more capital per person, or better productive techniques. We know that price ceilings create shortages, and price floors create unsold surpluses. We know that sustained rapid growth in the stock of money is accompanied by rapid inflation and vice versa. We know that government spending must be financed by some combination of taxation, revenue from sales of product, borrowing from private or foreign sources, issuing high powered money, or depleting stocks of government-held assets. We know that excise taxes, tariffs, and quotas reduce economic welfare in the sense that without them the same total resource pool could produce more satisfaction for some people at no cost to others. We know that permitting individuals to own property and engage in transactions with each other freely will benefit the participants and harm no one, provided that everyone is well informed and acts in his own interest, no one has monopoly power, and there are no external diseconomies such as pollution and no external economies such as increases in the value of my neighbor's real estate if I improve mine. (These are very large provisos, and in some situations they are not even approximately satisfied.) We know that a system of private property and free contract leads to an unequal distribution of income and wealth. We know that social or political attempts to equalize the distribution of income and wealth have perverse incentive effects, so that equalization has a cost in the form of a reduction of total output, and we know a good deal about which kinds of policies are the most perverse in this respect (quotas and price controls) and which are the least perverse (income and inheritance taxes at moderate rates, and good public education and health programs). We know how to look for long term as well as short term effects, and for indirect as well as direct effects. We also know a great many technical theorems that require mathematics to state clearly and to prove. …

Perhaps the way to sum up this philosophy in the fewest possible words is to say, "Use your head. And use your heart, too."

Those interested in digging more specifically into the legacy of Christ's work on econometrics might usefully begin with the March-April 1998 issue of the Journal of Econometrics, which is a special issue devoted to "Studies in Econometrics in Honor of Carl F. Christ." The issue isn't freely available online, but for a taste, here are the paper titles and authors:

  • "Editor's introduction: studies in econometrics in honor of Carl F. Christ," by Lawrence R. Klein
  • "Econometric implications of the government budget constraint," by Christopher A. Sims
  • "Impulse response and forecast error variance asymptotics in nonstationary VARs," by Peter C. B. Phillips
  • "Business cycle analysis without much theory: A look at structural VARs," by Thomas F. Cooley and Mark Dwyer
  • "Lending cycles," by Patrick K. Asea and Brock Blomberg
  • "Quasi-rational expectations, an alternative to fully rational expectations: An application to US beef cattle supply," by Marc Nerlove and Ilaria Fornari
  • "Identification and Kullback information in the GLSEM," by Phoebus J. Dhrymes
  • "The finite sample properties of simultaneous equations' estimates and estimators: Bayesian and non-Bayesian approaches," by Arnold Zellner
  • "Model specification and endogeneity," by Alice Nakamura and Masao Nakamura
  • "Finite sample moments results for the quasi-FIML estimator of the reduced form: The linear case," by Michael D. McCarthy
  • "Nonlinear and non-Gaussian state-space modeling with Monte Carlo simulations," by Hisashi Tanizaki and Roberto S. Mariano
  • "Heterogeneous information arrival and option pricing," by Patrick K. Asea and Mthuli Ncube
  • "The detection and estimation of long memory in stochastic volatility," by F. Jay Breidt, Nuno Crato, and Pedro de Lima
  • "Rational expectations, inflation and the nominal interest rate," by Jean A. Crockett

Spring 2017 Journal of Economic Perspectives Available On-line

For the past 30 years, my actual paid job (as opposed to my blogging hobby) has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which back in 2011 decided–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. Here, I'll start with the Table of Contents for the just-released Spring 2017 issue. Below that are abstracts and direct links for all of the papers. I will almost certainly blog about some of the individual papers in the next week or two, as well.

________________________

Symposium: Recent Ideas in Econometrics

"The State of Applied Econometrics: Causality and Policy Evaluation," by Susan Athey and Guido W. Imbens

In this paper, we discuss recent developments in econometrics that we view as important for empirical researchers working on policy evaluation questions. We focus on three main areas, in each case, highlighting recommendations for applied work. First, we discuss new research on identification strategies in program evaluation, with particular focus on synthetic control methods, regression discontinuity, external validity, and the causal interpretation of regression methods. Second, we discuss various forms of supplementary analyses, including placebo analyses as well as sensitivity and robustness analyses, intended to make the identification strategies more credible. Third, we discuss some implications of recent advances in machine learning methods for causal effects, including methods to adjust for differences between treated and control units in high-dimensional settings, and methods for identifying and estimating heterogenous treatment effects.
Full-Text Access | Supplementary Materials

"The Use of Structural Models in Econometrics," by Hamish Low and Costas Meghir

This paper discusses the role of structural economic models in empirical analysis and policy design. The central payoff of a structural econometric model is that it allows an empirical researcher to go beyond the conclusions of a more conventional empirical study that provides reduced-form causal relationships. Structural models identify mechanisms that determine outcomes and are designed to analyze counterfactual policies, quantifying impacts on specific outcomes as well as effects in the short and longer run. We start by defining structural models, distinguishing between those that are fully specified and those that are partially specified. We contrast the treatment effects approach with structural models, and present an example of how a structural model is specified and the particular choices that were made. We cover combining structural estimation with randomized experiments. We then turn to numerical techniques for solving dynamic stochastic models that are often used in structural estimation, again with an example. The penultimate section focuses on issues of estimation using the method of moments.
Full-Text Access | Supplementary Materials

"Twenty Years of Time Series Econometrics in Ten Pictures," by James H. Stock and Mark W. Watson

This review tells the story of the past 20 years of time series econometrics through ten pictures. These pictures illustrate six broad areas of progress in time series econometrics: estimation of dynamic causal effects; estimation of dynamic structural models with optimizing agents (specifically, dynamic stochastic equilibrium models); methods for exploiting information in "big data" that are specialized to economic time series; improved methods for forecasting and for monitoring the economy; tools for modeling time variation in economic relationships; and improved methods for statistical inference. Taken together, the pictures show how 20 years of research have improved our ability to undertake our professional responsibilities. These pictures also remind us of the close connection between econometric theory and the empirical problems that motivate the theory, and of how the best econometric theory tends to arise from practical empirical problems.
Full-Text Access | Supplementary Materials

"Machine Learning: An Applied Econometric Approach," by Sendhil Mullainathan and Jann Spiess
Machines are increasingly doing "intelligent" things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
Full-Text Access | Supplementary Materials

"Identification and Asymptotic Approximations: Three Examples of Progress in Econometric Theory," by James L. Powell
In empirical economics, the size and quality of datasets and computational power has grown substantially, along with the size and complexity of the econometric models and the population parameters of interest. With more and better data, it is natural to expect to be able to answer more subtle questions about population relationships, and to pay more attention to the consequences of misspecification of the model for the empirical conclusions. Much of the recent work in econometrics has emphasized two themes: The first is the fragility of statistical identification. The other, related theme involves the way economists make large-sample approximations to the distributions of estimators and test statistics. I will discuss how these issues of identification and alternative asymptotic approximations have been studied in three research areas: analysis of linear endogenous regressor models with many and/or weak instruments; nonparametric models with endogenous regressors; and estimation of partially identified parameters. These areas offer good examples of the progress that has been made in econometrics.
Full-Text Access | Supplementary Materials

"Undergraduate Econometrics Instruction: Through Our Classes, Darkly," by Joshua D. Angrist and Jörn-Steffen Pischke
The past half-century has seen economic research become increasingly empirical, while the nature of empirical economic research has also changed. In the 1960s and 1970s, an empirical economist's typical mission was to "explain" economic variables like wages or GDP growth. Applied econometrics has since evolved to prioritize the estimation of specific causal effects and empirical policy analysis over general models of outcome determination. Yet econometric instruction remains mostly abstract, focusing on the search for "true models" and technical concerns associated with classical regression assumptions. Questions of research design and causality still take a back seat in the classroom, in spite of having risen to the top of the modern empirical agenda. This essay traces the divergent development of econometric teaching and empirical practice, arguing for a pedagogical paradigm shift.
Full-Text Access | Supplementary Materials

Symposium: Are Measures of Economic Growth Biased?

"Underestimating the Real Growth of GDP, Personal Income, and Productivity," by Martin Feldstein
Economists have long recognized that changes in the quality of existing goods and services, along with the introduction of new goods and services, can raise grave difficulties in measuring changes in the real output of the economy. But despite the attention to this subject in the professional literature, there remains insufficient understanding of just how imperfect the existing official estimates actually are. After studying the methods used by the US government statistical agencies as well as the extensive previous academic literature on this subject, I have concluded that, despite the various improvements to statistical methods that have been made through the years, the official data understate the changes of real output and productivity. The official measures provide at best a lower bound on the true real growth rate with no indication of the size of the underestimation. In this essay, I briefly review why national income should not be considered a measure of well-being; describe what government statisticians actually do in their attempt to measure improvements in the quality of goods and services; consider the problem of new products and the various attempts by economists to take new products into account in measuring overall price and output changes; and discuss how the mismeasurement of real output and of prices might be taken into account in considering various questions of economic policy.
Full-Text Access | Supplementary Materials

"Challenges to Mismeasurement Explanations for the US Productivity Slowdown," by Chad Syverson
The United States has been experiencing a slowdown in measured labor productivity growth since 2004. A number of commentators and researchers have suggested that this slowdown is at least in part illusory because real output data have failed to capture the new and better products of the past decade. I conduct four disparate analyses, each of which offers empirical challenges to this "mismeasurement hypothesis." First, the productivity slowdown has occurred in dozens of countries, and its size is unrelated to measures of the countries' consumption or production intensities of information and communication technologies (ICTs, the type of goods most often cited as sources of mismeasurement). Second, estimates from the existing research literature of the surplus created by internet-linked digital technologies fall far short of the $3 trillion or more of "missing output" resulting from the productivity growth slowdown. Third, if measurement problems were to account for even a modest share of this missing output, the properly measured output and productivity growth rates of industries that produce and service ICTs would have to have been multiples of their measured growth in the data. Fourth, while measured gross domestic income has been on average higher than measured gross domestic product since 2004—perhaps indicating workers are being paid to make products that are given away for free or at highly discounted prices—this trend actually began before the productivity slowdown and moreover reflects unusually high capital income rather than labor income (i.e., profits are unusually high). In combination, these complementary facets of evidence suggest that the reasonable prima facie case for the mismeasurement hypothesis faces real hurdles when confronted with the data.
Full-Text Access | Supplementary Materials
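
The scale of the "missing output" number in Syverson's abstract is easier to appreciate with a rough compounding calculation. The sketch below is purely illustrative: the size of the productivity slowdown and the GDP level are assumptions chosen for the arithmetic, not figures taken from the paper.

```python
# Back-of-envelope (illustrative assumptions, not figures from Syverson's paper):
# suppose measured productivity growth slowed by about 1.2 percentage points per
# year starting in 2004, and annual GDP is roughly $18 trillion.
slowdown_pp = 0.012       # assumed annual productivity growth shortfall
years = 12                # 2004 through roughly 2016
gdp_trillions = 18.0      # assumed level of annual GDP, in $ trillions

# If growth had continued at the pre-2004 pace, output today would be higher by:
shortfall_share = (1 + slowdown_pp) ** years - 1
missing_output = shortfall_share * gdp_trillions
print(f"Cumulative shortfall: {shortfall_share:.1%} of GDP, "
      f"about ${missing_output:.1f} trillion per year")
# => roughly 15% of GDP, on the order of $2-3 trillion -- the magnitude the
#    mismeasurement hypothesis would have to explain.
```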

"How Government Statistics Adjust for Potential Biases from Quality Change and New Goods in an Age of Digital Technologies: A View from the Trenches," by Erica L. Groshen, Brian C. Moyer, Ana M. Aizcorbe, Ralph Bradley and David M. Friedman
A key economic indicator is real output. To get this right, we need to measure accurately both the value of nominal GDP (done by the Bureau of Economic Analysis) and key price indexes (done mostly by the Bureau of Labor Statistics). All of us have worked on these measurements while at the BLS and the BEA. In this article, we explore some of the thorny statistical and conceptual issues related to measuring a dynamic economy. An often-stated concern is that the national economic accounts miss some of the value of some goods and services arising from the growing digital economy. We agree that measurement problems related to quality changes and new goods have likely caused growth of real output and productivity to be understated. Nevertheless, these measurement issues are far from new, and, based on the magnitude and timing of recent changes, we conclude that it is unlikely that they can account for the pattern of slower growth in recent years. First we discuss how the Bureau of Labor Statistics currently adjusts price indexes to reduce the bias from quality changes and the introduction of new goods, along with some alternative methods that have been proposed. We then present estimates of the extent of remaining bias in real GDP growth that stem from potential biases in growth of consumption and investment. And we take a look at potential biases that could result from challenges in measuring nominal GDP, including those involving the digital economy. Finally, we review ongoing work at BLS and BEA to reduce potential biases and further improve measurement.
Full-Text Access | Supplementary Materials

Articles
"Social Media and Fake News in the 2016 Election," by Hunt Allcott and Matthew Gentzkow
Following the 2016 US presidential election, many have expressed concern about the effects of false stories ("fake news"), circulated largely through social media. We discuss the economics of fake news and present new data on its consumption prior to the election. Drawing on web browsing data, archives of fact-checking websites, and results from a new online survey, we find: 1) social media was an important but not dominant source of election news, with 14 percent of Americans calling social media their "most important" source; 2) of the known false news stories that appeared in the three months before the election, those favoring Trump were shared a total of 30 million times on Facebook, while those favoring Clinton were shared 8 million times; 3) the average American adult saw on the order of one or perhaps several fake news stories in the months around the election, with just over half of those who recalled seeing them believing them; and 4) people are much more likely to believe stories that favor their preferred candidate, especially if they have ideologically segregated social media networks.
Full-Text Access | Supplementary Materials

"Yuliy Sannikov: Winner of the 2016 Clark Medal," by Susan Athey and Andrzej Skrzypacz
Yuliy Sannikov is an extraordinary theorist who has developed methods that offer new insights in analyzing problems that had seemed well-studied and familiar: for example, decisions that might bring about cooperation and/or defection in a repeated-play prisoner's dilemma game, or that affect the balance of incentives and opportunism in a principal-agent relationship. His work has broken new ground in methodology, often through the application of stochastic calculus methods. The stochastic element means that his work naturally captures situations in which there is a random chance that monitoring, communication, or signaling between players is imperfect. Using calculus in the context of continuous-time games allows him to overcome tractability problems that had long hindered research in a number of areas. He has substantially altered the toolbox available for studying dynamic games. This essay offers an overview of Sannikov's research in several areas.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

Adam Smith on Beggar Thy Neighbor

Beggar-my-neighbor is a simple card game which, according to references in the Oxford English Dictionary, dates back at least to the early 1700s. In 1776, Adam Smith applied the expression to trade policy. The often-quoted short passage from Book IV, Chapter III of The Wealth of Nations reads:

"[N]ations have been taught that their interest consisted in beggaring all their neighbours. Each nation has been made to look with an invidious eye upon the prosperity of all the nations with which it trades, and to consider their gain as its own loss. Commerce, which ought naturally to be, among nations, as among individuals, a bond of union and friendship, has become the most fertile source of discord and animosity."

Here's some of the broader argument surrounding Smith's particular comment. A key underlying theme for Smith is that attempts to limit trade are, pretty much by definition, attempts to limit competition. As a result, such attempts favor the producers (Smith calls them "monopolists") who have less need to face competition, but impose costs on consumers of goods. Smith also discusses how the economic strength of a neighboring nation may be "dangerous in war and politics," but beneficial economically because it offers more opportunity for trade. Finally, Smith points out that dire warnings of the dangers of international trade were prominent, then as now, but when you look across countries, the places more open to international trade tend to be better-off, not worse-off. Smith wrote:

"Nothing, however, can be more absurd than this whole doctrine of the balance of trade, upon which, not only these restraints, but almost all the other regulations of commerce are founded. When two places trade with one another, this doctrine supposes that, if the balance be even, neither of them either loses or gains; but if it leans in any degree to one side, that one of them loses and the other gains in proportion to its declension from the exact equilibrium. Both suppositions are false. A trade which is forced by means of bounties and monopolies may be and commonly is disadvantageous to the country in whose favour it is meant to be established … But that trade which, without force or constraint, is naturally and regularly carried on between any two places is always advantageous, though not always equally so, to both.

By advantage or gain, I understand not the increase of the quantity of gold and silver, but that of the exchangeable value of the annual produce of the land and labour of the country, or the increase of the annual revenue of its inhabitants. … 

It is a losing trade, it is said, which a workman carries on with the alehouse; and the trade which a manufacturing nation would naturally carry on with a wine country may be considered as a trade of the same nature. I answer, that the trade with the alehouse is not necessarily a losing trade. In its own nature it is just as advantageous as any other, though perhaps somewhat more liable to be abused. The employment of a brewer, and even that of a retailer of fermented liquors, are as necessary divisions of labour as any other. It will generally be more advantageous for a workman to buy of the brewer the quantity he has occasion for, than to brew it himself, and if he is a poor workman, it will generally be more advantageous for him to buy it, by little and little, of the retailer than a large quantity of the brewer. He may no doubt buy too much of either, as he may of any other dealers in his neighbourhood, of the butcher, if he is a glutton, or of the draper, if he affects to be a beau among his companions. It is advantageous to the great body of workmen, notwithstanding, that all these trades should be free, though this freedom may be abused in all of them, and is more likely to be so, perhaps, in some than in others. Though individuals, besides, may sometimes ruin their fortunes by an excessive consumption of fermented liquors, there seems to be no risk that a nation should do so.  …

By such maxims as these, however, nations have been taught that their interest consisted in beggaring all their neighbours. Each nation has been made to look with an invidious eye upon the prosperity of all the nations with which it trades, and to consider their gain as its own loss. Commerce, which ought naturally to be, among nations, as among individuals, a bond of union and friendship, has become the most fertile source of discord and animosity. … 

That it was the spirit of monopoly which originally both invented and propagated this doctrine cannot be doubted; and they who first taught it were by no means such fools as they who believed it. In every country it always is and must be the interest of the great body of the people to buy whatever they want of those who sell it cheapest. The proposition is so very manifest that it seems ridiculous to take any pains to prove it; nor could it ever have been called in question had not the interested sophistry of merchants and manufacturers confounded the common sense of mankind. Their interest is, in this respect, directly opposite to that of the great body of the people. As it is the interest of the freemen of a corporation to hinder the rest of the inhabitants from employing any workmen but themselves, so it is the interest of the merchants and manufacturers of every country to secure to themselves the monopoly of the home market. Hence in Great Britain, and in most other European countries, the extraordinary duties upon almost all goods imported by alien merchants. Hence the high duties and prohibitions upon all those foreign manufactures which can come into competition with our own. Hence, too, the extraordinary restraints upon the importation of almost all sorts of goods from those countries with which the balance of trade is supposed to be disadvantageous; that is, from those against whom national animosity happens to be most violently inflamed.

The wealth of a neighbouring nation, however, though dangerous in war and politics, is certainly advantageous in trade. In a state of hostility it may enable our enemies to maintain fleets and armies superior to our own; but in a state of peace and commerce it must likewise enable them to exchange with us to a greater value, and to afford a better market, either for the immediate produce of our own industry, or for whatever is purchased with that produce. As a rich man is likely to be a better customer to the industrious people in his neighbourhood than a poor, so is likewise a rich nation. A rich man, indeed, who is himself a manufacturer, is a very dangerous neighbour to all those who deal in the same way. All the rest of the neighbourhood, however, by far the greatest number, profit by the good market which his expence affords them. They even profit by his underselling the poorer workmen who deal in the same way with him. The manufacturers of a rich nation, in the same manner, may no doubt be very dangerous rivals to those of their neighbours. This very competition, however, is advantageous to the great body of the people, who profit greatly besides by the good market which the great expence of such a nation affords them in every other way. Private people who want to make a fortune never think of retiring to the remote and poor provinces of the country, but resort either to the capital, or to some of the great commercial towns. They know that where little wealth circulates there is little to be got, but that where a great deal is in motion, some share of it may fall to them. The same maxims which would in this manner direct the common sense of one, or ten, or twenty individuals, should regulate the judgment of one, or ten, or twenty millions, and should make a whole nation regard the riches of its neighbours as a probable cause and occasion for itself to acquire riches. A nation that would enrich itself by foreign trade is certainly most likely to do so when its neighbours are all rich, industrious, and commercial nations. 

It is in consequence of these maxims that the commerce between France and England has in both countries been subjected to so many discouragements and restraints. If those two countries, however, were to consider their real interest, without either mercantile jealousy or national animosity, the commerce of France might be more advantageous to Great Britain than that of any other country, and for the same reason that of Great Britain to France. France is the nearest neighbour to Great Britain. In the trade between the southern coast of England and the northern and north-western coasts of France, the returns might be expected, in the same manner as in the inland trade, four, five, or six times in the year. The capital, therefore, employed in this trade could in each of the two countries keep in motion four, five, or six times the quantity of industry, and afford employment and subsistence to four, five, or six times the number of people, which an equal capital could do in the greater part of the other branches of foreign trade. …
But the very same circumstances which would have rendered an open and free commerce between the two countries so advantageous to both, have occasioned the principal obstructions to that commerce. Being neighbours, they are necessarily enemies, and the wealth and power of each becomes, upon that account, more formidable to the other; and what would increase the advantage of national friendship serves only to inflame the violence of national animosity. They are both rich and industrious nations; and the merchants and manufacturers of each dread the competition of the skill and activity of those of the other. Mercantile jealousy is excited, and both inflames, and is itself inflamed, by the violence of national animosity: And the traders of both countries have announced, with all the passionate confidence of interested falsehood, the certain ruin of each, in consequence of that unfavourable balance of trade, which, they pretend, would be the infallible effect of an unrestrained commerce with the other.

There is no commercial country in Europe of which the approaching ruin has not frequently been foretold by the pretended doctors of this system from an unfavourable balance of trade. After all the anxiety, however, which they have excited about this, after all the vain attempts of almost all trading nations to turn that balance in their own favour and against their neighbours, it does not appear that any one nation in Europe has been in any respect impoverished by this cause. Every town and country, on the contrary, in proportion as they have opened their ports to all nations, instead of being ruined by this free trade, as the principles of the commercial system would lead us to expect, have been enriched by it. …

Some years back I was giving some talks in South Africa, and found myself in a back-and-forth with a group of sharp and thoughtful students about whether the US was better off if other countries of the world, like South Africa, were poor or rich. The sense of many of the students was that the US wanted other countries to remain poor, so that the US could import goods produced by low-cost labor. I argued that, for all the back-and-forth of US policy over time, the main thrust of policies like leading the way in the GATT and the WTO reflected an underlying belief that the US economy benefited when other countries were better off. Channeling Adam Smith, I found myself asking the group: "If you want to be economically successful, would you prefer to live in a rich neighborhood or a poor one?" I doubt that I changed anyone's mind, but at least the question led to a pause in the conversation, in which I could almost hear ideas being recalibrated and reframed.

Furman Reflects on Eight Years of Economic Policy-Making

Jason Furman was an economic policy insider through all eight years of the Obama administration, with "roughly the first half of them as Deputy Director of the National Economic Council and the second half of them as Chairman of the Council of Economic Advisers." In the Arnold C. Harberger Distinguished Lecture on Economic Development at the UCLA Burkle Center for International Relations, Furman reflects on "The Role of Economists in Economic Policymaking" (April 27, 2017).

Furman has a pleasantly wry and discursive tone, with lots of anecdotes that mix together stories of success and failure. At least for readers like me, it's a tone that carries far more appeal and persuasive power than feverish oratory set to the sound of trumpets. For example, here are some of Furman's thoughts on the four things that economists have to offer when the answer to a policy question is not known.

The first is just describing the data. Describing the data does not tell you what caused what. It does not tell you what is the right policy or what is the wrong policy. But it can help you at least figure out what questions you should be asking, what areas you should be looking at to solve those questions, and what you can do about them. The data can be complicated. The government released two different measures of economic growth in the fourth quarter of 2016, one was 1.0 percent and one was 2.1 percent. The government also released two different measures of job growth in March 2017, 98,000 jobs and 472,000 jobs. Moreover, for the first quarter of this year the data tells a divergent story—with strong employment growth and strong “soft data”, like surveys of confidence, but much weaker GDP growth and weaker “hard data”, like actual sales numbers. I have spent a lot of time on these issues and developed a number of rules of thumb, almost all of which can be summarized by saying that you should look at a wide range of data, ideally smoothed over a longer period of time—without placing too much weight on any given indicator … Data description can be more sophisticated than just looking at single numbers or even trends. Sometimes it helps to decompose a number into its components to identify what is driving it, at least in an arithmetic sense.
The second set of techniques we use is economic theory. Economic theory can sometimes give you a very helpful answer to a question. One of the biggest insights in economics is that some items are more valuable to one person than to another person, and if those two people trade things, they can both be better off. These are the basic motivations for a market economy and the basis of the argument for expanding international trade. … Let me give you one: the example of the allocation of electromagnetic spectrum …  Spectrum is a scarce resource and in many cases the rights to use it were allocated decades ago. Today, much of the best spectrum is reserved for the exclusive use of television broadcasters while users of smartphones and tablets often face frustrating and economically costly delays in accessing data because of crammed airwaves. In the Los Angeles area, for example, there were more than 25 broadcast stations, some of them with only a handful of viewers, most of whom could watch the shows on cable, online or through other means. A station with only a few thousand viewers might be worth $5 million, but at the same time may own a license for spectrum that a mobile broadband provider would be willing to buy for $50 million. We proposed to deal with this by setting up an incentive auction. Anyone who is a television broadcaster who wants to sell their spectrum can do so, and anyone who is a mobile broadband provider or anything else and wants to buy that spectrum can do so. The auction is entirely voluntary—a station will only sell if it is better off with cash than with spectrum, a mobile broadband provider will only buy if it (and presumably its consumers) benefit more from the spectrum than the cash they pay, and taxpayers get a cut of the difference in the bids to reflect the government’s role in organizing the process, including repackaging the spectrum into contiguous blocks to make it more valuable. Economic theory was enough to motivate and support this proposal—which ultimately resulted in $5 billion for taxpayers as well as profits for television broadcasters, mobile broadband providers, and benefits for their consumers. … In the last two decades, one of the fastest growing areas of economics has been “behavioral economics,” which relaxes the standard assumption that people are fully rational, and instead pays close attention to the ways people can be myopic, make decisions that depend on framing, and have limited attention spans or ability to incorporate information. One of the big successes of labor economics has been getting policy to focus less on the rate of return to savings and more on making it easier and more automatic to save. 
The third set of techniques we use is empirical work, trying to understand the causes and effects of different economic phenomena. I want to give you an example of a mistake I was involved in because I did not think hard about causation. At the end of 2008, I was working with Congress on legislation to raise the tax on tobacco products in order to pay for an expansion of the Children’s Health Insurance Program (CHIP). The main proposal was to raise the tax on a pack of cigarettes from $0.39 per pack to $1.01 per pack. But we also needed to set tax rates on a wide range of other tobacco products including roll-your-own tobacco, pipe tobacco, small cigars, large cigars, and more. Amidst everything that was going on at the end of 2008 with the Great Recession I did not pay enough attention to this issue, even though I once sat through what felt like an endless meeting on the topic. What came out of that meeting was a proposal to raise the tax rate on roll-your-own tobacco by more than $20 a pound while leaving the tax rate on pipe tobacco largely unchanged. What followed was a huge decline in the sale of roll-your-own tobacco and a huge increase in the sale of pipe tobacco … It turns out that roll-your-own tobacco and pipe tobacco are highly substitutable—not because people have shifted to smoking pipes, but because you can still put pipe tobacco in a piece of paper, roll it up, and smoke it. This is not just a minor, technical observation. It turns out to be highly consequential for public health. I have estimated that the 2009 tobacco tax increase will reduce the number of premature deaths due to smoking by between 15,000 and 70,000 for each cohort. But it would have reduced them even more if we had harmonized the tax rate on different tobacco products, as we did in a subsequent proposal. In fact, economists in the Treasury Department estimated that the reduction in tobacco consumption under a harmonization proposal would be nearly two and a half times the size it would be under an increase in the cigarette tax alone that raises comparable revenue. 
 
Fourth, we sometimes combine empirical description, theory, and causation to build models that allow us to simulate the impact of different policies. For example, such simulations can tell you how a tax cut will be distributed or affect the economy, how much a particular health plan will cost per person covered, or how much carbon emissions will change as a result of different policy approaches. While any model is limited and imperfect, models can be especially useful in quantifying plausible tradeoffs in designing a policy and communicating the impact of policies to policymakers.
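
Furman's first set of techniques, describing the data with the rule of thumb of smoothing noisy series over a longer window, lends itself to a tiny illustration. In the sketch below the monthly payroll figures are invented for the sake of the example; the only number echoing the lecture is the weak 98,000 March 2017 estimate Furman cites.

```python
# Minimal sketch of "smooth noisy data over a longer window."
# The monthly payroll gains below are hypothetical, except that the final
# 98,000 figure echoes the weaker of the two March 2017 estimates cited above.
monthly_job_gains = [216_000, 232_000, 207_000, 145_000, 222_000, 98_000]

def trailing_average(series, window=3):
    """Average the last `window` observations -- one simple way to smooth."""
    return sum(series[-window:]) / window

print(f"Latest month alone:  {monthly_job_gains[-1]:>9,.0f}")
print(f"Three-month average: {trailing_average(monthly_job_gains, 3):>9,.0f}")
print(f"Six-month average:   {trailing_average(monthly_job_gains, 6):>9,.0f}")
# The single-month number swings widely; the smoothed figures are far more
# stable, which is the point of not leaning on any one indicator.
```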

It's interesting to me that of these four approaches, Furman lists basic description of data and basic theoretical insights first, ahead of the more complex modeling approaches. Here are a few more words from Furman:

In the Obama Administration I worked on everything from helping to prevent a second Great Depression to restructuring the post office—although I confess that we were only successful on one of these. Many of the topics I worked on are standard fare for economics, like the minimum wage, international trade, or environmental regulation. Others might be topics you do not associate with economists, like criminal justice reform, immigration, and sanctions. And on none of these did economists fully get their way which, ceteris paribus, is a good thing …

After the election many people expressed frustration that facts and analysis did not matter anymore. Paul Krugman cited the statistic that the three major networks spent a cumulative total of 32 minutes covering policy issues in the 2016 election. … I certainly share the sentiment that facts and analysis play too small a role not just in campaigns but in governing. And I think there is a role for many types of communication and persuasion. For myself, I am not someone who could give a rousing speech at a rally so I will stick to my comparative advantage. And I do believe that the right response to the lack of sufficient weight for facts and analysis is to embrace more facts and analysis.

For those who are interested in the Council of Economic Advisers and the role of academic economists in policy-making, here are a few more links: 

Snapshots of Merger and Acquisition Activity

Here's a figure showing seven waves of corporate mergers in the US since 1851, according to the Institute for Mergers, Acquisitions & Alliances (a not-for-profit think tank that does research and workshops, and runs an educational certificate program, with main offices in New York, Vienna, Zurich, and Ho Chi Minh City). As the figure shows, the first three waves of mergers have substantial periods of time between them. The most recent four waves are all since 1985.

Looking more closely at recent years, here is some additional information on these patterns. In this figure, the blue bars (and the left-hand axis) show the number of announced deals in each year, while the red line (and right axis) shows the total value of the deals. For example, the number of deals rose from 2011 to 2014, but not all that sharply, while the value of the deals jumped considerably.

In both this figure and the one above, the "wave" of mergers in the 1980s doesn't look to me so much like a separate wave, but rather just a time of transition to the higher levels of mergers that would follow. In addition, from a longer historical perspective, it looks more as if the US economy has entered a period since the 1990s when mergers are generally at high levels–although sometimes higher than others.

For another perspective, here is the number and value of mergers and acquisitions on a worldwide basis since 1985, which again shows either three "waves" or just a generally higher level of mergers since the late 1990s, depending on how you interpret it.

Of course, patterns like these don't prove anything by themselves, and I try not to be someone who over-extrapolates. But I'll just say that these merger and acquisition patterns are certainly consistent with a vision of the US and the global economy in which large companies seem relatively more focused on spending their funds and their managerial time on financial maneuvers that allow them to reduce their exposure to market pressures (either by combining with direct competitors or combining with firms along the supply chain), and thus relatively less focused on seeking to improve their competitive position by internal investments in capital, skills, and innovation.

"But When, Friend, Dost Thee Think?"

As final exams draw near at many colleges and universities (at least for those on a semester schedule), it seems appropriate to pass along this old story as told by Nicholas Murray Butler, President of Columbia University, at the 143rd annual banquet of the Chamber of Commerce of the State of New York, 1911 (pp. 43-55), and available through the magic of the HathiTrust Digital Library.

"I cannot help recalling an admirable story which is told of ROBERT SOUTHEY, once Poet Laureate of England. SOUTHEY was boasting to a Quaker friend of how exceedingly well he occupied his time, how he organized it, how he permitted no moment to escape; how every instant was used; how he studied Portuguese while he shaved, and higher mathematics in his bath.

"And then the Quaker said to him softly: 'But when, friend, dost thee think?'

"My impression is that we need now some time to think, in order that reflection and study of principle, and grasp upon realities, may take the place of perpetual discussion and exposition, partly of what is, partly of what never was, partly of what never can be."

Digital Forces and the Other 70% of the US Economy

For those feeling the need for a bracing dose of optimism about the prospects for US productivity growth, I recommend "The Coming Productivity Boom: Transforming the Physical Economy with Information," written by Michael Mandel and Bret Swanson (March 2017), and published by the Technology CEO Council (which is more-or-less what it sounds like, a "public policy advocacy organization comprising Chief Executive Officers from America's leading information technology companies, including Akamai, Dell, IBM, Intel, Micron, Oracle, Qualcomm and Xerox.")

Mandel and Swanson split the US economy into two big chunks, which they call the "digital economy" and the "physical economy." In the digital economy, which accounts for about 30% of US GDP, they argue that investment in information technology and productivity growth are doing fairly well. But in the physical economy, investment in information technology and productivity growth have been low. Thus, they argue that there is potential for rapid productivity growth in more than two-thirds of the US economy.

What are the industries in these two economic groupings? The digital economy is: "Computer and electronics production; publishing; movies, music, television, and other entertainment; telecom; Internet search and social media; professional and technical services (legal, accounting, computer programming, scientific research, management consulting, design, advertising); finance and insurance; management of companies and enterprises; administrative and support services." The physical economy is: "All other industries, including agriculture; mining; construction; manufacturing (except computers and electronics); transportation and warehousing; wholesale and retail trade; real estate; education; healthcare; accommodations and food services; recreation." Obviously, their notion of "physical" isn't just material goods, but also includes a number of the biggest service industries like health care and education–as long as what is ultimately being delivered is not actually digital in nature.

After dividing up the economy in this way, here are a couple of the patterns they find. The first figure compares investment in information technology in the digital and physical sector; the other compares productivity growth in these two sectors.

They argue in some detail that with appropriate investments in information technology, and the associated rethinking of production and output, substantial productivity growth is possible in manufacturing, transportation, education, health care, wholesale/retail sales, and other areas. One recent example is how information technology, by allowing dramatically improved mapping of underground geological formations, has boosted production of natural gas from shale.
What about manufacturing in particular? They write:

Government data shows that most domestic factories have not added much to their stock of information technology equipment and software over the past 10 years. Between 2004 and 2014, manufacturing IT capital stock increased by just $46 billion, and more than 65% of that gain was in the computer and electronics industry. … Leaving out the computer and electronics industry, the capital stock of IT equipment in the rest of manufacturing has barely grown since 2000. The capital stock of software in manufacturing is, likewise, barely higher now than in 2000. …

What about robots? According to trade association data, 31,000 robots valued at $1.8 billion were shipped to North American customers in 2016. The spending on robots pales next to the $300 billion in industrial equipment and manufacturing buildings that corporations spent in the United States in 2016. In many cases, robots help the United States retain jobs. Consider a modern semiconductor fab, which has installed a proprietary network of wafer-handling robots. This system probably reduced the number of wafer-handling jobs by several dozen. Yet the robots allowed the new fab to be built in the United States, instead of in a low-cost overseas location, thus saving or creating some 1,200 high-paying American jobs.

The upside of robots in manufacturing spreading out into new industries is thus enormous. That’s crucial for increasing the productivity of existing manufacturing processes and creating new processes altogether. In many ways manufacturing is the classic case where atoms will be boosted by bits. The process is already underway—but the diffusion of the Industrial Internet across the manufacturing sectors will take place over the next two decades. New, IT-enabled product categories, combined with design and customization that increasingly treats manufacturing as a service, will not necessarily bring back “old jobs” but instead create new and better ones. …
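
The scale comparison in that passage is worth spelling out. This back-of-envelope sketch uses only the figures quoted above (31,000 robots worth $1.8 billion, against $300 billion of spending on industrial equipment and buildings):

```python
# Arithmetic on the figures quoted above (North America, 2016).
robots_shipped = 31_000
robots_value = 1.8e9               # dollars
equipment_and_buildings = 300e9    # dollars

avg_robot_price = robots_value / robots_shipped
robot_share = robots_value / equipment_and_buildings
print(f"Average price per robot:      ${avg_robot_price:,.0f}")   # ~ $58,000
print(f"Robots as share of spending:  {robot_share:.1%}")          # ~ 0.6%
```

In other words, by these numbers robot shipments are still well under one percent of manufacturers' capital outlays, which is why the authors see so much room for diffusion.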

Economists are constitutionally suspicious of claims that large gains are out there, just waiting to be achieved. As economists like to say, "If there's a $20 bill out there on the sidewalk, why hasn't someone already picked it up?" Mandel and Swanson offer this answer:

"Why has it taken so long? It sounds like a tautology, but industries whose output is information are inherently more amenable to digitization. … But when we examine industries whose output is primarily physical, the game gets far more difficult. To digitize a complex physical object such as a spinning jet engine, an unknown natural environment such as a buried oil field, or a rapidly changing manmade environment such as the traffic and work patterns of a large city, requires a level of sophisticated technology that was not available until fairly recently. Low-cost sensors that can be widely distributed; high-bandwidth wireless networks capable of collecting the information from the sensor; computing systems capable of analyzing terabytes of data in real time; artificial vision that can make sense of images and artificial intelligence that can make decisions—each of these are necessary parts of applying IT to the physical industries. Continued advances and price reductions in sensing, cloud computing, and broadband connectivity, combined with new thinking and new focus about how to apply these technologies to physical problems, are finally about to open up the other four-fifths of the economy to the magical laws of Moore and Metcalfe. … Moore's law, named after Intel founder Gordon Moore, refers to the tendency of silicon microchips to roughly double in cost performance (because of the industry's remarkable ability to scale transistors and other chip features) every 18 months to two years. … Metcalfe's law refers to the observation by Ethernet inventor Robert Metcalfe that the power or value of networks rises not by the number of connected nodes but by something resembling the square of the number of nodes. This is one reason "network effects" can be so powerful."
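
The two "laws" at the end of that passage can be written down directly. The sketch below is a stylized illustration rather than an empirical claim: it takes the two-year end of the 18-months-to-two-years range for Moore's law and uses the simple square-of-the-nodes form of Metcalfe's law.

```python
# Stylized illustration of the two regularities named above.

def moore_improvement(years, doubling_period=2.0):
    """Cost-performance multiple after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

def metcalfe_value(nodes):
    """Metcalfe's law: network value scales roughly with the square of the node count."""
    return nodes ** 2

print(f"Moore: improvement over 20 years ~ {moore_improvement(20):,.0f}x")        # ~1,024x
# Doubling the number of connected devices roughly quadruples network value:
print(f"Metcalfe: value ratio, 2n vs n nodes = {metcalfe_value(200) / metcalfe_value(100):.0f}x")
```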

My own sense is that the answer to "why has it taken so long" goes beyond technology. In the first decade of the 21st century, many firms in the physical economy were often focused on how to cut costs by outsourcing or setting up global production chains. The leading providers in industries like health care and education seem exceptionally set in their traditional ways, with practices that are hard for either internal executives or external entrepreneurs to uproot. The Great Recession discouraged firms across the economy from investing for a time. But it does seem at least imaginable to me that these patterns are shifting. The most important infrastructure for encouraging economic productivity during the first half of the 21st century won't be roads and bridges, but a combination of secure information and communication networks and a reliable energy supply.

Mandel in particular has a track record here. Back in the mid-1990s, when he was writing for Business Week, he wrote about the potential for real economic growth in what he called the "new economy" (for example, here) before it became evident to everyone else in the official statistics.

Demand and the Frac Sand Example

Most people who teach economics are on a continual lookout for current examples of supply and demand. Otherwise, you end up falling back on hypothetical goods like "widgets" and "leets" (which is "steel" spelled backward), at which point you can almost see the life and brightness fade out of the eyes of students. A nice lively example of shifts in demand comes from the demand for sand used in hydraulic fracking operations. As the Wall Street Journal recently reported, "Latest Threat to U.S. Oil Drillers: The Rocketing Price of Sand: The market for a key ingredient in fracking is again surging" (by Christopher M. Matthews and Erin Ailworth, March 23, 2017).

A couple of years ago, the USGS published \”Frac Sand in the United States—A Geological and Industry Overview,\” by Mary Ellen Benson and Anna B. Wilson, with a section on \”Frac Sand Consumption History\” contributed by Donald I. Bleiwas (Open File Report 2015-1107, posted July 30, 2015). The report includes this useful figure, in which the bars show the metric tons of sand used for fracking (measured on the left axis); the numbers above the bars show the number of horizontal drilling rigs in operation in the US during any given week of the year; and the line shows the value of the sand (right axis).

The basic lesson is that fracking is up and it is using a lot more sand. If you look a little more closely at the years from 2010 to 2012, you can see that the number of horizontal drilling rigs rose from 822 to 1,151, but the quantity of sand being used more than doubled.  These data can be updated a bit. According to the Baker Hughes North American Rotary Rig Count, the number of horizontal rigs dropped in 2015 and stayed fairly steady at this lower level in 2016, as the price of oil dropped, but more recently the number of horizontal rigs has been rising again. 

For an update through 2016 on production levels and price, the USGS publishes an annual fact sheet on various minerals. Fracking sand falls into the category of \"Industrial Sand and Gravel,\" of which more than 70% is used for fracking. Here\'s the relevant table from the 2017 factsheet. Thus, back in 2012 there were about 50 million tons of total sand production, with about 70% of it going to fracking. By 2014, output of sand had doubled, mostly due to increased demand from fracking. The price rose from $52 per ton in 2010 to $106 per ton in 2014. Then output and price sagged in 2015 and 2016.
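
For readers who want the arithmetic spelled out, here is a rough back-of-the-envelope sketch in Python using the figures quoted above. The numbers are approximations pulled from the text (and the pairing of the 2010 price with the 2012 quantity is loose), so treat this as an illustration of the demand-shift story, not a precise calculation: sand use grew much faster than the rig count, so sand intensity per rig rose, and with price and quantity both roughly doubling, the pattern looks like a rightward demand shift traced out along an upward-sloping supply curve.

```python
# Back-of-the-envelope arithmetic using the approximate figures cited above;
# a rough illustration of the demand-shift story, not precise data.

rigs_2010, rigs_2012 = 822, 1_151        # horizontal rigs in operation
sand_growth_2010_2012 = 2.0              # sand use "more than doubled"

# Sand use grew faster than the rig count, so per-rig sand intensity rose.
rig_growth = rigs_2012 / rigs_2010                      # ~1.40x
intensity_growth = sand_growth_2010_2012 / rig_growth   # at least ~1.43x
print(f"Rig count grew {rig_growth:.2f}x; sand per rig grew at least {intensity_growth:.2f}x")

# Price and quantity both roughly doubled between the early and later years
# (~50 million tons at ~$52/ton vs ~100 million tons at ~$106/ton), which is
# consistent with a demand shift traced out along an upward-sloping supply curve.
q0, p0 = 50e6, 52      # tons, $/ton (early period, roughly 2010-2012)
q1, p1 = 100e6, 106    # tons, $/ton (2014)
arc_elasticity = ((q1 - q0) / ((q1 + q0) / 2)) / ((p1 - p0) / ((p1 + p0) / 2))
print(f"Implied arc elasticity of supply: {arc_elasticity:.2f}")   # ~1
```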

The Wall Street Journal story reports rising prices for sand this year. It also notes: \”In Louisiana, Chesapeake Energy Corp. recently pumped a record 50.2 million pounds of sand into a horizontal well roughly 1.8 miles long, piquing the interest of some rivals who are now weighing whether they can do the same.\” And here\’s a figure showing quantities of sand being used in different fields.

No world-changing lessons here. The higher prices for sand don\'t seem likely to make much difference in the quantity of fracking, at least for now, because sand is a fairly small slice of the overall cost. Environmental concerns are already being raised in some US locations about the extent of sand-mining, and those concerns are likely to become more acute as the demand for fracking sand rises. But my guess is that if it becomes difficult to increase the supply of fracking sand, then those doing the fracking will either find ways to economize on sand or will figure out some alternative substances that would fill a similar purpose.

Postscript: Some readers might want to revisit my post on \"Demand for Sand\" (April 16, 2014), which has some additional background on international demand for sand (mostly for concrete and construction purposes) and some environmental effects of sand-mining.

The Global Systemically Important Banks: An Update

\”The large global banks were at the heart of the global financial crisis. In response to the crisis, the international Financial Stability Forum was upgraded to the Financial Stability Board (FSB) in 2009, with the full participation of finance ministers and even heads of government. The newly established FSB then published an integrated set of policy measures, such as capital surcharges and resolution plans, to address the systemic and moral hazard risks associated with global systemically important banks (G-SIBs). Eight years later, it is time to take stock of the impact of these measures. We answer three questions on what happened to the G-SIBs. First, have they shrunk in size? Second, are they better capitalised? Third, and in reference to the reported end of global banking, have they reduced their global reach?\”

Thus writes Dirk Schoenmaker in \"What happened to global banking after the crisis?\" written as a Policy Contribution for the Bruegel think-tank (2017, Number 7). The short answers to his three questions are: 1) No; 2) Yes; 3) Yes. 
Schoenmaker offers a list of 33 global systemically important banks–that is, the banks that governments feel compelled to rescue out of fear that their failure could crash the financial systems of a country or two. Some examples of the larger ones include BNP Paribas, Deutsche Bank, and Groupe Crédit Agricole in Europe; Bank of America, JP Morgan Chase, Citigroup, and Wells Fargo in the US; Bank of China, China Construction Bank, and Industrial and Commercial Bank of China, all of course in China; HSBC and Barclays in the UK; and Mitsubishi UFJ FG and Sumitomo Mitsui FG in Japan. 
Here\'s a summary chart, with the top panel showing assets, reserves, and international reach of these 33 G-SIBs in 2007 and the bottom panel showing the same for 2015. Comparing the bottom lines of the two panels, the total assets of these institutions rose slightly from 2007 to 2015; their capital reserves rose fairly dramatically; and the share of their business done at home rather than regionally or globally declined. 
Schoenmaker has lots more to say about these patterns. For example, assets for these institutions have risen in China, Japan, and the United States, but fallen in the euro area and the UK. Capital reserves are quite a bit higher in the US and China than in the euro area. Bottom line: \”We conclude that reports on the death of global banking are greatly exaggerated.\”

When Restrictions on House-Building Meet Growing Demand: Interview with Joseph Gyourko

For a few years now, Joseph Gyourko has been doing research and writing essays focused on housing supply, and in particular on how local restrictions that hinder home-building, combined with growing demand in those localities, lead to large differences in prices. In an interview with Hites Amir, Gyourko lays out these views in the April 2017 issue of the Global Housing Watch Newsletter. (The newsletter is produced by Amir, who works for the International Monetary Fund, but it is not an official IMF document).

\”In forthcoming work, Ed Glaeser and I conclude that most housing markets in the interior of the country function so that the price of housing is no more than the sum of its true production costs (the free market price of land plus the cost of putting up the structure) plus a normal entrepreneurial profit for the homebuilder. That is what we teach should happen in our introductory microeconomics courses—namely, that price paid by consumers in the market should equal the real resource cost of producing the good (housing in this case). These well-functioning housing markets exist in a broad swath of the country outside of the Amtrak Corridor in the Northeast (Washington, D.C. to Boston) and the major West Coast markets from Seattle all the way down to San Diego. The bulk of the population lives in these well-functioning markets, by the way. They just are not focused on by the media. …

\”Restrictions began to be imposed in many west coast markets in the 1970s, with east coast markets in the northeast following the next decade. Thus, it is the people who owned in those markets at those times who enjoyed the most appreciation in their homes. Those people tend to be senior citizens today, and even if they do not earn relatively high incomes, they are wealthy because of the real capital gains on their homes. …  

\”If they limit supply sufficiently relative to demand, then the existing housing units will be rationed by price. Richer households will be able to bid more to live in their preferred markets, so we get the sorting of the rich into the higher priced coastal markets. … The divergence in home prices between low cost, elastically supplied markets and high cost, inelastically supplied markets has been growing over time—since 1950, at least. …  This can generate a spiral up in prices, and it can last a long time. In 1950 for example, housing in the most expensive metropolitan areas was about twice as costly as in the average market. At the turn of the century in 2000, the most expensive metros were at least four times more costly than the average market. I expect that gap to widen over the coming decade. …

\”In a well-functioning market with elastic supply, prices should be equal to fundamental production costs. In most American markets, that is no more than $200,000-$250,000.\”
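
In a well-functioning market, Gyourko\'s benchmark is essentially an accounting identity: price is roughly land cost plus structure cost plus a normal builder profit. Here is a minimal sketch of that benchmark in Python, my own illustration rather than anything from the interview; the dollar figures are purely hypothetical, chosen only to land in the $200,000-$250,000 range he cites for most American markets.

```python
# Gyourko's benchmark for a well-functioning (elastically supplied) housing
# market: price ~= free-market land cost + structure cost + normal builder profit.
# All dollar figures below are hypothetical, for illustration only.

def benchmark_price(land_cost, structure_cost, profit_margin=0.10):
    """Competitive-market house price: production cost plus a normal profit margin."""
    production_cost = land_cost + structure_cost
    return production_cost * (1 + profit_margin)

price = benchmark_price(land_cost=40_000, structure_cost=160_000)
print(f"Benchmark price: ${price:,.0f}")   # ~ $220,000

# In supply-restricted coastal markets, the observed price can sit far above this
# benchmark; the gap is one rough measure of the cost that regulation adds.
observed_price = 800_000          # hypothetical coastal-market price
print(f"Gap over production cost: ${observed_price - price:,.0f}")
```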

Here\'s a figure, which is a version of a diagram that appears in \"Superstar Cities,\" by Joseph Gyourko, Christopher Mayer, and Todd Sinai in the American Economic Journal: Economic Policy, November 2013 (5:4, pp. 167-99). The figure shows the distribution of average house values across US metropolitan statistical areas (MSAs) in 1950 and 2000. Average prices are higher in 2000 than in 1950, which isn\'t surprising, since the average house is bigger, too. But the key insight here is that the distribution of average housing prices across cities in 1950 is more bunched together, while the distribution across cities in 2000 is much wider, with a long right-hand tail.

[Figure: Distribution of average house values across US metropolitan statistical areas, 1950 and 2000]
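
To see the widening in miniature, here is a tiny sketch that mirrors the 2-to-4 ratio Gyourko describes above (most expensive metro versus the average market). The MSA average values below (in thousands of dollars) are invented purely for illustration; the real figures are in Gyourko, Mayer, and Sinai (2013).

```python
# Hypothetical illustration of the widening distribution described above.
# MSA average house values in thousands of dollars, invented for the sketch only.

msa_values_1950 = [7, 8, 9, 10, 10, 11, 11, 12, 14, 22]            # bunched together
msa_values_2000 = [100, 110, 120, 130, 140, 150, 170, 200, 280, 1000]  # long right tail

def spread_stats(values):
    """Mean and the ratio of the most expensive market to the mean."""
    mean = sum(values) / len(values)
    return {"mean": round(mean, 1), "max/mean": round(max(values) / mean, 2)}

print("1950:", spread_stats(msa_values_1950))   # max/mean ~ 2
print("2000:", spread_stats(msa_values_2000))   # max/mean ~ 4.2
```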

This wider distribution of housing prices across cities has broader trade-offs. It becomes harder for those living in cities with lower housing prices to relocate to cities with higher housing prices. It creates a greater level of economic segregation, because those with higher incomes are more likely to end up living in a smaller number of cities where they can afford the high housing prices, while those with middle-level or lower-level incomes struggle to find alternatives. It adds a dose of bitterness to local arguments over zoning, because current owners of high-priced houses (whether they have owned a house for decades or bought more recently) live in fear that an increase in the supply of housing--or even building more moderately-sized and moderately-priced housing--might in some way keep their own home from continuing to rise in price. Gyourko says:

\”I am a traditional housing and urban economist in this regard. New construction imposes real costs on its immediate neighbors and the broader community. Pollution is one, with congestion (on the roads, schools, etc.) being another. It is economically efficient to internalize these negative externalities by ‘taxing’ the developer in some way so the full costs of development are priced and paid for by the builder. What I am not in favor of is excessive regulation that imposes costs much higher than could be justified by spillovers. That results in too little development and creates affordability problems for the poor and the middle class.\”

In this hyperpartisan time in which we live, I feel compelled to add that the goal of rolling back local restrictions that limit house-building is advocated not just by the usual free-market suspects who tend to lean right politically; it was also supported by the Obama administration. For example, President Obama said in a speech to the US Conference of Mayors in January 2016: \"We can work together to break down rules that stand in the way of building new housing and that keep families from moving to growing, dynamic cities.\" The Obama administration also published a \"Housing Development Toolkit\" (September 2016) to provide strategies for overcoming excessive local barriers to home-building.