Hydraulic Models of the Economy: Phillips, Fisher, Financial Plumbing

Part of the lore of earlier economists, as it was passed down to me around the campfire back in the Neolithic era, is the story of how Alban William Housego (Bill) Phillips, the originator of the famous 1958 paper that drew the “Phillips curve” tradeoff between unemployment and inflation, also built a hydraulic economic model: that is, a physical model of the economy in which flows of consumption, saving, investment, and other economic forces were represented by liquid moving through tubes and pipes. What I hadn’t known until more recently is that Irving Fisher also created a hydraulic model of the economy.

As a starting point for background on Bill Phillips and his famous 1958 Phillips curve paper, I can recommend the article by A.G. Sleeman called “Retrospectives: The Phillips Curve: A Rushed Job?” which appeared in the Winter 2011 issue of my own Journal of Economic Perspectives. (Like all articles in JEP from the current issue back to the first issue in 1987, it is freely available on-line, compliments of the American Economic Association.) The hydraulic computer is not the main focus of Sleeman’s article, but he provides evidence that it was a major part of Phillips’ career.

Apparently, Phillips completed his undergraduate degree at the London School of Economics in June 1949, specializing in sociology, and receiving only a “Pass.” From there, Sleeman writes (footnotes and references omitted):

“In 1950, despite his poor degree, Phillips was appointed an Assistant Lecturer in the Department of Economics at LSE at the top of the pay scale, and simultaneously began his Ph.D. studies. The reason was that by 1949, Phillips had built the MONIAC: Monetary National Income Analogue Computer. (The name is a play on ENIAC, the Electronic Numerical Integrator and Computer, which had been announced in 1946 as the first general-purpose electronic computer.) MONIAC was a hydraulic machine, made of transparent plastic pipes and tanks fastened to a wooden board, about six feet high, four feet wide, and three feet deep. The MONIAC used colored water to represent the stocks and flows of an IS–LM style model and simulated how the model behaved as monetary and fiscal variables varied. The MONIAC brought Phillips to the attention of James Meade and, ultimately, to Lionel Robbins and other members of the LSE economics department. In 1950, Phillips published a paper on MONIAC in the August issue of Economica, eight months after failing his Applied Economics and Economic History exams and passing Principles by a single point. In October 1951, the 36-year-old Phillips was promoted to Lecturer and tenured, having published only one paper. Over the next two years Phillips completed his doctorate…”

Phillips was born in New Zealand, and a December 2007 article in the Reserve Bank of New Zealand Bulletin discusses the MONIAC. Here’s a photo of Phillips standing beside the MONIAC at LSE, and a more recent photo of a MONIAC that the New Zealand central bank received from LSE and has restored to working order.

In the New Zealand central bank publication, Tim Ng and Matthew Wright describe the functioning of the MONIAC this way: “Separate water tanks represent households, business, government, exporting and importing sectors of the economy. Coloured water pumped around the system measures income, spending and GDP. The system is programmable and capable of solving nine simultaneous equations in response to any change of the parameters, to reach a new equilibrium. A plotter can record changes in the trade balance, GDP and interest rates on paper. Simulation experiments with fiscal policy, monetary policy and exchange rates can be carried out. Although the MONIAC was conceived as a teaching tool, it is also capable of generating economic forecasts. Phillips himself used the MONIAC as a teaching tool at the London School of Economics. Around 14 machines were built …” For those with an insatiable need to know more about the MONIAC, I recommend this special December 2011 issue of Economia Politica: Journal of Analytical and Institutional Economics, which includes about a dozen articles about the lasting influence of Phillips and the MONIAC.
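
The claim that the MONIAC could solve nine simultaneous equations is easier to appreciate with a toy version. Here is a minimal sketch, assuming a two-equation IS-LM system with made-up parameters (the function name and all numerical values are my own illustration, not Phillips's specification), of the kind of equilibrium the machine found hydraulically:

```python
def islm_equilibrium(a, b, e, d, G, m, k, h):
    """Solve a two-equation IS-LM system by substitution.

    IS (goods market):  Y = a + b*Y + (e - d*r) + G
    LM (money market):  m = k*Y - h*r,  where m = M/P
    """
    Y = (a + e + G + d * m / h) / (1.0 - b + d * k / h)
    r = (k * Y - m) / h
    return Y, r

# Illustrative parameter values only.
Y, r = islm_equilibrium(a=200, b=0.8, e=150, d=10, G=100, m=300, k=0.5, h=20)

# A fiscal-policy "experiment" of the sort run on the MONIAC:
# raise government spending G and read off the new equilibrium.
Y2, r2 = islm_equilibrium(a=200, b=0.8, e=150, d=10, G=120, m=300, k=0.5, h=20)
```

Raising G raises both equilibrium output and the interest rate, which is the same kind of comparative-static exercise that the MONIAC displayed as shifting water levels.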

But until recently, I hadn’t known that Irving Fisher had also built a hydraulic model of the economy as part of his doctoral dissertation back in 1891. I learned about it in the article by Robert W. Dimand and Rebeca Gomez Betancourt in the most recent issue of my own JEP. Their article is primarily focused, as the title notes, on “Irving Fisher’s Appreciation and Interest (1896) and the Fisher Relation.” But in their capsule overview of Fisher’s life, they write (citations and footnotes omitted):

“His 1891 doctoral dissertation in mathematics and political economy (Yale’s first Ph.D. in political economy or economics), which was published as Mathematical Investigations in the Theory of Value and Prices (1892), brought general equilibrium analysis to North America; it was supervised jointly by the physicist and engineer J. Willard Gibbs and the economist and sociologist William Graham Sumner. Paul Samuelson once described Fisher (1892) as the ‘greatest doctoral dissertation in economics ever written’ … because Fisher invented general equilibrium analysis for himself before his last-minute discovery of the writings of Léon Walras and Francis Ysidro Edgeworth. Fisher’s thesis went beyond these writings in one striking respect: influenced by Gibbs’s work in mechanics, Fisher not only imagined but actually built a hydraulic mechanism to simulate the determination of equilibrium prices and quantities—in effect, a hydraulic computer in the days before electronic computers …”

Oddly enough, Fisher also wrote a paper that is an early harbinger of the Phillips curve literature. As Dimand and Gomez Betancourt write: “In a series of articles, Fisher correlated distributed lags of price level changes with economic activity and unemployment. His article ‘A Statistical Relationship between Unemployment and Price Level Changes’ (1926 [1973]), little noticed when first published by the International Labour Office, attracted rather more attention when reprinted almost 50 years later in the Journal of Political Economy as ‘Lost and Found: I Discovered the Phillips Curve—Irving Fisher.’”

I’m not aware of any working models of Fisher’s hydraulic computer, nor of any photographs of a working model. But back in 2000, William C. Brainard and Herbert E. Scarf took on the task of investigating how the model worked in “How to Compute Equilibrium Prices in 1891.” They reprint sketches of Fisher’s hydraulic computer from his dissertation. It apparently consisted of a series of cisterns, rods, floats, bellows, and tubes, and it represents three consumers and the three goods that they consume.

Apparently, Fisher used his hydraulic model of the economy as a teaching tool for 25 years. Brainard and Scarf write (references omitted):

“Fisher regarded his model as ‘the physical analog of the ideal economic market,’ with the virtue that ‘The elements which contribute to the determination of prices are represented each with its appropriate role and open to the scrutiny of the eye …’ providing a ‘clear and analytical picture of the interdependence of the many elements in the causation of prices …’ Fisher also saw the machine as a way of demonstrating comparative static results, ‘… to employ the mechanism as an instrument of investigation and by it, study some complicated variations which could scarcely be successfully followed without its aid. …’

“Although we do not know what experiments Fisher actually ran with his machine, he does describe eight comparative static exercises. Some of these illustrate basic features of the system, for example that proportional increases in money incomes result in an equal proportional increase of each price, with no change in the allocation of goods. Another simple exercise discussed by Fisher examines whether proportional increases in the endowment of goods necessarily result in proportional decreases in prices, as was apparently, and incorrectly, believed by Mill. Some exercises illustrate less intuitive properties of exchange economies: increasing one individual’s income may make some other individual better off, and also the possibility of ‘immiserating growth,’ i.e., increasing an individual’s endowment of a good may actually lower his welfare.”
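
In modern terms, what Fisher's machine computed can be sketched as a price-adjustment ("tatonnement") loop for a three-consumer, three-good exchange economy. The Cobb-Douglas preferences, the endowments, and the adjustment rule below are my own illustrative assumptions, not Fisher's actual specification; his machine balanced water levels rather than iterating arithmetic.

```python
# Three consumers with Cobb-Douglas preferences, three goods.
# Each row gives one consumer's expenditure shares / endowments.
alphas = [
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.3, 0.2, 0.5],
]
endow = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
]

def excess_demand(p):
    """Cobb-Douglas demand minus endowment, summed over consumers."""
    z = [0.0, 0.0, 0.0]
    for a, w in zip(alphas, endow):
        income = sum(pi * wi for pi, wi in zip(p, w))
        for g in range(3):
            z[g] += a[g] * income / p[g] - w[g]
    return z

# Tatonnement: nudge each non-numeraire price in the direction of its
# excess demand until markets clear (good 0 serves as numeraire).
p = [1.0, 0.5, 2.0]
for _ in range(10000):
    z = excess_demand(p)
    p = [p[0]] + [max(1e-6, p[g] + 0.1 * z[g]) for g in (1, 2)]
```

With these symmetric expenditure shares the prices settle at (1, 1, 1), and rerunning the loop after changing one consumer's endowment reproduces comparative-static experiments of the sort Fisher described.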

The idea of a hydraulic computer seems anachronistic in these days of electronic computation, but I can imagine that, as an illustrative teaching tool, watching flows of liquid rebalance might be at least as useful as watching a professor sketch a supply and demand diagram. In addition, the notion of the economy as a hydraulic set of forces still has considerable rhetorical power. We talk about “liquidity” and “bubbles.” The Federal Reserve publishes “Flow of Funds” accounts for the U.S. economy. When economists talk about the financial crisis of 2008 and 2009, they sometimes talk in terms of financial “plumbing.” For example, here’s Darrell Duffie:

“And there has been a lot of progress made, but I do feel that we’re looking at years of work to improve the plumbing, the infrastructure. And what I mean by that are institutional features of how our financial markets work that can’t be adjusted in the short run by discretionary behavior. They’re just there or they’re not. It’s a pipe that exists or it’s a pipe that’s not there. And if those pipes are too small or too fragile and therefore break, the ability of the financial system to serve its function in the macroeconomy … is broken. If not well designed, the plumbing can get broken in any kind of financial crisis if the shocks are big enough. It doesn’t matter if it’s a subprime mortgage crisis or a eurozone sovereign debt crisis. If you get a big pulse of risk that has to go through the financial system and it can’t make it through one of these pipes or valves without breaking it, then the financial system will no longer function as it’s supposed to and we’ll have recession or possibly worse.”

I find myself wondering what a hydraulic model of an economy would look like if it also included bubbles, runs on financial institutions, and credit crunches, along with tubes that could break. Sounds messy, and potentially quite interesting.

Options for the Deficit from CBO

The Congressional Budget Office has just published a report called “Choices for Deficit Reduction.” To me, the bottom line is that coming to grips with the U.S. fiscal situation in the medium run is going to require going outside everyone’s current comfort zone.

To set the stage, here are the CBO projections for the debt/GDP ratio in the next couple of decades. As I’ve discussed before on this blog (for example, here), the CBO is required by law to calculate a “baseline scenario” which projects the debt under current law. The difficulty with this approach is that current law can incorporate all kinds of future spending cuts or tax increases that aren’t actually going to happen when the time comes. Thus, the CBO also calculates an “alternative fiscal scenario,” which is based on four assumptions:

  • “That all expiring tax provisions (other than the recent reduction in the payroll tax for Social Security), including tax provisions that expired at the end of December 2011, are extended;
  • That the parameters of the alternative minimum tax (AMT) are indexed to increase with inflation after 2011 (starting from the 2011 exemption amount);
  • That Medicare’s payment rates for physicians’ services are held constant at their current level; and 
  • That provisions of the Budget Control Act of 2011 that established automatic enforcement procedures designed to reduce discretionary and mandatory spending beginning in January 2013 do not go into effect, although the law’s original caps on discretionary appropriations remain in place.”

The “alternative fiscal scenario” provides a useful starting point, because you can then look at how changing each of these four assumptions, along with many other policy changes, would affect the debt/GDP ratio over time. Under the alternative scenario, the debt/GDP ratio is headed for the danger zone of about 90-100% by the early 2020s, and then for unimaginably high levels after that. (For discussions of why that 90-100% ratio is likely to be important, see earlier posts here, here, and here.)

As one way of getting a handle on what needs to be done, the CBO looks at a variety of possible spending and tax policies, and how they would affect the budget deficit, which in the alternative fiscal scenario would be about $1 trillion in 2020. Thus, steps that reduce the projected 2020 deficit by $1 trillion would balance the budget, and get the debt/GDP ratio on a declining path. Steps that reduce the projected 2020 deficit by $500 billion would keep the debt/GDP ratio in 2020 about the same as it is now, but the ratio would start rising after that point. An intermediate set of policies that reduce the 2020 deficit by $750 billion would put the long-term trend of the debt/GDP ratio on a slightly declining path, more or less what the baseline scenario looks like.
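
The arithmetic behind these scenarios can be sketched in a few lines. The function below is my own toy projection, not the CBO's model: it assumes a constant overall deficit as a share of GDP and steady nominal growth, with roughly CBO-like starting numbers (both assumptions are mine, for illustration).

```python
def debt_path(debt0, gdp0, deficit_share, growth, years):
    """Project the debt/GDP ratio with a constant deficit-to-GDP share.

    Illustrative arithmetic only; the CBO's projections model revenues,
    spending, and interest payments in far more detail.
    """
    debt, gdp, path = debt0, gdp0, []
    for _ in range(years):
        gdp *= 1 + growth            # nominal GDP grows each year
        debt += deficit_share * gdp  # each year's deficit adds to the debt
        path.append(debt / gdp)
    return path

# Debt held by the public was roughly 70-75% of GDP around 2012;
# assume 4% nominal GDP growth.
unchanged = debt_path(debt0=0.73, gdp0=1.0, deficit_share=0.05,
                      growth=0.04, years=10)
trimmed = debt_path(debt0=0.73, gdp0=1.0, deficit_share=0.025,
                    growth=0.04, years=10)
```

Deficits of about 5 percent of GDP push the ratio steadily upward, while cutting them roughly in half puts the ratio on a gently declining path, which is the flavor of the comparison among the $500 billion, $750 billion, and $1 trillion options above.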

So what do the possible policy options look like? They come in three tables: possibilities for reducing mandatory spending, possibilities for reducing discretionary spending, and possibilities for increasing taxes. You’ll notice that many of the recommendations come with various footnotes and caveats, which in turn require heading for the actual report. Some of the recommendations overlap in various ways (like different methods of altering Social Security or Medicare), so you can’t just add up all the choices. But for a quick sense of the issue, just glance at the right-hand numbers showing the total projected change in the deficit. If you take the intermediate goal of cutting the projected 2020 deficit by $750 billion, most people will very quickly run out of easily palatable options.

For example, repealing the expansion of health insurance coverage and the “individual mandate” to buy health insurance from the Affordable Care Act would reduce the 2020 deficit by about $190 billion, which is real money, but not nearly enough (and would leave tens of millions of Americans again without health insurance). Or letting the tax cuts enacted in 2001, 2003, and 2009 expire only for couples filing joint tax returns with income over $250,000 per year and for single taxpayers with income over $200,000, along with indexing the alternative minimum tax (AMT) for inflation, would reduce the 2020 deficit by $110 billion, which again is real money, but not nearly enough to do what is needed. So even if Republicans and Democrats were to compromise on eliminating Obama’s health care plan in exchange for eliminating the Bush tax cuts for those above $200,000 or $250,000 per year, they would have less than half of the $750 billion in deficit reduction. To put it another way, Americans and the U.S. economy are more deeply hooked on deficit spending, and withdrawal is going to be more painful, than most people realize.

Here are the three tables of options:

Fetal Origins and Epigenetics: Interview with Janet Currie

Douglas Clement has an insightful interview with Janet Currie in the September 2012 issue of The Region magazine published by the Federal Reserve Bank of Minneapolis. As Currie says near the start of the interview: “Labor economists think a lot about human capital and investments in it. Traditionally, that’s something to do with education, … [b]ut I’m interested in health as human capital as well, and understanding how health and education intersect.” And: “It is a broad concept, human capital … Not all these different boxes, but an integrated whole.” The interview makes a number of interesting points about an array of subjects, including how financial incentives affect the practice of medicine (no matter what doctors say!), early childhood education, and women in the economics profession.

Here, I’ll focus on one topic: the fetal origins of inequality. Here’s the question from Clement, and part of Currie’s answer:

Region: Could you briefly review the fetal origins hypothesis and how economists have expanded its reach—to test scores, education and income as well as health?

Currie: I think the phrase itself was coined by David Barker, a physician who was interested in whether there was a biological mechanism such that if the fetus was starved in utero it would be more likely to be obese or more likely to have heart disease or diabetes, things related to that in later life.
… An infant programmed in this way would then be more likely to gain a lot of weight later on and to have diseases related to obesity…. I believe Thalidomide was the first thing that really shocked people and showed that if you give drugs to the woman, that it could have an effect on the fetus. People were also working on the Dutch “Hunger Winter” prior to Barker, looking into whether being literally starved in utero had long-term effects.

So economists have taken that idea and run with it. Economic studies are examining a wide range of things that might affect fetal health and asking whether they have long-term consequences. I think there’s pretty broad acceptance now of the idea that all kinds of things that happen when people are in utero seem to have a long-term effect.

One of the things I talked about in my Ely lecture was what mechanism might underlie the long term effects, and I raised the idea of “epigenetic” changes as one possibility. The way I like to think about that is you have the gene, which only changes very slowly when you have mutations. But then kind of on top of the gene you have the epigenome, which determines which parts of the gene are expressed. And that can change within one generation. There are animal experiments that do things like change the diet of guinea pigs and all the baby guinea pigs come out a different color. It can be pretty dramatic. …   The idea is that the fetal period might be particularly important because these epigenetic switches are being set one way or another. And then once they’re set, it’s more difficult to change them later on.

I think we haven’t really been able to look at all of the implications of that given the limitations of the data. We don’t have very much data where we can follow people from, say, in utero to some later period. But, that’s where the frontier is, trying to do that kind of research and make those linkages….

I think a really interesting thing about the fetal origins hypothesis for public policy is that if it’s really important what happens to the fetus, and some people think that maybe the first trimester is the most important or the most vulnerable period, then you’re talking about women who might not even know that they’re pregnant. It really means you should be targeting a whole different population than, say, 15 years ago, when we thought, oh, we need to be targeting preschool kids instead of kids once they reach school age. Now we’re kind of pushing it back. Then it was, “We need to be playing Mozart to infants.” Now the implication is that we’ve got to reach these mothers before they even get pregnant if we really want to improve conditions.

Epigenetics implies that it does not make sense to talk about nature versus nurture. If nature is the gene and nurture is the thing that sets the switches, then the outcome depends on both of those things. So you can’t really talk about nature or nurture in most situations. It has to be some combination of both. …

One thing that is interesting—and I’m starting to do a little bit of work like this myself—is thinking about children in developing countries. Things we’re looking at here in the United States, like the effects of in utero exposure to pollution on child health and economic outcomes, involve problems that are much worse in developing countries. So if we can find an effect here … for instance, my E-ZPass paper suggested that the incidence of low birth weight was 8 percent higher for pregnant women who are subjected to large amounts of auto exhaust because they live near highway toll plazas. If that is true here, then what must be the effect in Beijing? It must be even bigger than that.

Currie, along with Douglas Almond, wrote on this topic in the Summer 2011 issue of my own Journal of Economic Perspectives in “Killing Me Softly: The Fetal Origins Hypothesis” (25(3): 153-72). That article, like all articles in JEP back to the first issue in 1987, is freely available to all courtesy of the American Economic Association. Another useful starting point for this literature is Currie’s Ely lecture on this topic (mentioned in passing in her answer above): “Inequality at Birth: Some Causes and Consequences,” American Economic Review, 101(3): 1-22. The AER is not freely available on-line, but many in academia will have on-line access through AEA memberships or library subscriptions.

For yet another recent angle, there’s an article in a recent issue of the Economist magazine on “Epigenetics and Health: Grandma’s curse.” The big question is whether an acquired characteristic, like, say, asthma caused by heavy smoking, can be inherited. A plain-vanilla theory of inheritance says this isn’t possible: smoking is bad for you, but it doesn’t alter your genes. However, studies on pregnant rats apparently show that if a first generation of rats is dosed with nicotine, which leads to asthma, then the first generation of offspring also has a propensity to asthma. That result is unsurprising, because it’s just the fetal origins hypothesis in action. But apparently, the next generation of rats after that also has a greater propensity to asthma, although that generation was never exposed to nicotine directly. The epigenetic explanation is that nicotine doesn’t change genes, but it can alter the “switches” that determine what characteristics are expressed by those genes, and those different “switches” can be to some extent inherited. As the article writes: “[T]hose epigenetic changes that are inherited seem to be subsequently reversible. But the idea that acquired characteristics can be inherited at all is still an important and novel one …”

Fall 2012 Journal of Economic Perspectives

The Fall 2012 issue of my own Journal of Economic Perspectives is now freely available on-line. Actually, all issues of JEP back to the first issue in 1987 are freely available on-line, compliments of the American Economic Association.

The issue starts with a three-paper symposium on “contingent valuation”: that is, whether it makes sense to use survey techniques to estimate the costs of events like the Exxon Valdez oil spill in 1989 or the BP oil spill in 2010. The symposium has an overview paper laying out the issues, followed by pro and con viewpoints. Next comes a five-paper symposium on various aspects of China’s economy: labor markets, macroeconomic imbalances, and perspectives on patterns of long-term growth. The final three papers are about the Clark medal awarded to Amy Finkelstein, about Irving Fisher’s famous 1896 paper, and the “Recommendations for Further Reading” feature that I contribute to each issue.

Symposium on Contingent Valuation

“From Exxon to BP: Has Some Number Become Better Than No Number?” by Catherine L. Kling, Daniel J. Phaneuf and Jinhua Zhao

On March 23, 1989, the Exxon Valdez ran aground in Alaska’s Prince William Sound and released over 250,000 barrels of crude oil, resulting in 1300 miles of oiled shoreline. The Exxon spill ignited a debate about the appropriate compensation for damages suffered, and among economists, a debate concerning the adequacy of methods to value public goods, particularly when the good in question has limited direct use, such as the pristine natural environment of the spill region. The efficacy of stated preference methods generally, and contingent valuation in particular, is no mere academic debate. Billions of dollars are at stake. An influential symposium appearing in this journal in 1994 provided arguments for and against the credibility of these methods, and an extensive research program published in academic journals has continued to this day. This paper assesses what occurred in this academic literature between the Exxon spill and the BP disaster. We will rely on theoretical developments, neoclassical and behavioral paradigms, empirical and experimental evidence, and a clearer elucidation of validity criteria to provide a framework for readers to ponder the question of the validity of contingent valuation and, more generally, stated preference methods.

“Contingent Valuation: A Practical Alternative When Prices Aren’t Available,” by Richard T. Carson

A person may be willing to make an economic tradeoff to assure that a wilderness area or scenic resource is protected even if neither that person nor (perhaps) anyone else will actually visit this area. This tradeoff is commonly labeled “passive use value.” Contingent valuation studies ask questions that help to reveal the monetary tradeoff each person would make concerning the value of goods or services. Such surveys are a practical alternative approach for eliciting the value of public goods, including those with passive use considerations. First I discuss the Exxon Valdez oil spill of March 1989, focusing on why it is important to measure monetary tradeoffs for goods where passive use considerations loom large. Although discussions of contingent valuation often focus on whether the method is sufficiently reliable for use in assessing natural resource damages in lawsuits, it is important to remember that most estimates from contingent valuation studies are used in benefit–cost assessments, not natural resource damage assessments. Those working on benefit–cost analysis have long recognized that goods and impacts that cannot be quantified are valued, implicitly, by giving them a limitless value when government regulations preclude certain activities, or giving them a value of zero by leaving certain consequences out of the analysis. Contingent valuation offers a practical alternative for reducing the use of either of these extreme choices. I put forward an affirmative case for contingent valuation and address a number of the concerns that have arisen.

“Contingent Valuation: From Dubious to Hopeless,” by Jerry Hausman

Approximately 20 years ago, Peter Diamond and I wrote an article for this journal analyzing contingent valuation methods. At that time Peter’s view was that contingent valuation was hopeless, while I was dubious but somewhat more optimistic. But 20 years later, after millions of dollars of largely government-funded research, I have concluded that Peter’s earlier position was correct and that contingent valuation is hopeless. In this paper, I selectively review the contingent valuation literature, focusing on empirical results. I find that three long-standing problems continue to exist: 1) hypothetical response bias that leads contingent valuation to overstatements of value; 2) large differences between willingness to pay and willingness to accept; and 3) the embedding problem which encompasses scope problems. The problems of embedding and scope are likely to be the most intractable. Indeed, I believe that respondents to contingent valuation surveys are often not responding out of stable or well-defined preferences, but are essentially inventing their answers on the fly, in a way which makes the resulting data useless for serious analysis. Finally, I offer a case study of a prominent contingent valuation study done by recognized experts in this approach, a study that should be only minimally affected by these concerns but in which the answers of respondents to the survey are implausible and inconsistent.

Symposium on China’s Economy

“The End of Cheap Chinese Labor,” by Hongbin Li, Lei Li, Binzhen Wu and Yanyan Xiong

In recent decades, cheap labor has played a central role in the Chinese model, which has relied on expanded participation in world trade as a main driver of growth. At the beginning of China’s economic reforms in 1978, the annual wage of a Chinese urban worker was only $1,004 in U.S. dollars. The Chinese wage was only 3 percent of the average U.S. wage at that time, and it was also significantly lower than the wages in neighboring Asian countries such as the Philippines and Thailand. The Chinese wage was also low relative to productivity. However, wages are now rising in China. In 2010, the annual wage of a Chinese urban worker reached $5,487 in U.S. dollars, which is similar to wages earned by workers in the Philippines and Thailand and significantly higher than those earned by workers in India and Indonesia. China’s wages also increased faster than productivity since the late 1990s, suggesting that Chinese labor is becoming more expensive in this sense as well. The increase in China’s wages is not confined to any sector, as wages have increased for both skilled and unskilled workers, for both coastal and inland areas, and for both exporting and nonexporting firms. We benchmark wage growth to productivity growth using both national- and industry-level data, showing that Chinese labor was kept cheap until the late 1990s but the relative cost of labor has increased since then. Finally, we discuss the main forces that are pushing wages up.

“Labor Market Outcomes and Reforms in China,” by Xin Meng

Over the past few decades of economic reform, China’s labor markets have been transformed into an increasingly market-driven system. China has two segregated economies: the rural and urban. Understanding the shifting nature of this divide is probably the key to understanding the most important labor market reform issues of the last decades and the decades ahead. From 1949, the Chinese economy allowed virtually no labor mobility between the rural and urban sectors. Rural-urban segregation was enforced by a household registration system called “hukou.” Individuals born in rural areas receive “agriculture hukou” while those born in cities are designated as “nonagricultural hukou.” In the countryside, employment and income were linked to the commune-based production system. Collectively owned communes provided very basic coverage for health, education, and pensions. In cities, state-assigned life-time employment, centrally determined wages, and a cradle-to-grave social welfare system were implemented. In the late 1970s, China’s economic reforms began, but the timing and pattern of the changes were quite different across rural and urban labor markets. This paper focuses on employment and wages in the urban labor markets, the interaction between the urban and rural labor markets through migration, and future labor market challenges. Despite the remarkable changes that have occurred, inherited institutional impediments still play an important role in the allocation of labor; the hukou system remains in place, and 72 percent of China’s population is still identified as rural hukou holders. China must continue to ease its restrictions on rural–urban migration, and must adopt policies to close the widening rural-urban gap in education, or it risks suffering both a shortage of workers in the growing urban areas and a deepening urban-rural economic divide.

“Understanding China’s Growth: Past, Present, and Future,” by Xiaodong Zhu

The pace and scale of China’s economic transformation have no historical precedent. In 1978, China was one of the poorest countries in the world. The real per capita GDP in China was only one-fortieth of the U.S. level and one-tenth the Brazilian level. Since then, China’s real per capita GDP has grown at an average rate exceeding 8 percent per year. As a result, China’s real per capita GDP is now almost one-fifth the U.S. level and at the same level as Brazil. This rapid and sustained improvement in average living standard has occurred in a country with more than 20 percent of the world’s population so that China is now the second-largest economy in the world. I will begin by discussing briefly China’s historical growth performance from 1800 to 1950. I then present growth accounting results for the period from 1952 to 1978 and the period since 1978, decomposing the sources of growth into capital deepening, labor deepening, and productivity growth. But the main focus of this paper will be to examine the sources of growth since 1978, the year when China started economic reform. Perhaps surprisingly, given China’s well-documented sky-high rates of saving and investment, I will argue that China’s rapid growth over the last three decades has been driven by productivity growth rather than by capital investment. I also examine the contributions of sector-level productivity growth, and of resource reallocation across sectors and across firms within a sector, to aggregate productivity growth. Overall, gradual and persistent institutional change and policy reforms that have reduced distortions and improved economic incentives are the main reasons for the productivity growth.
\”Aggregate Savings and External Imbalances in China,\” by Dennis Tao Yang
Over the last decade, the internal and external macroeconomic imbalances in China have risen to unprecedented levels. In 2008, China\’s national savings rate soared to over 53 percent of its GDP, whereas its current account surplus exceeded 9 percent of GDP. This paper presents a unified framework for understanding the structural causes of these imbalances. I argue that the imbalances are attributable to a set of policies and institutions embedded in the economy. I propose a unified framework for understanding the joint causes of the high savings rate and external imbalances in China. My explanations first focus on an array of factors that encouraged saving across the corporate, government, and household sectors, such as policies that affected sectoral income distribution, along with factors like incomplete social welfare reforms, and population control policies. I then turn to policies that limited investment in China, thus preventing the high savings from being used domestically. Finally, I will examine how trade policies, such as export tax rebates, special economic zones, and exchange rate policies, strongly promote exports. Moreover, the accession of China to the World Trade Organization has dramatically amplified the effects of these structural distortions. In conclusion, I recommend some policy reforms for rebalancing the Chinese economy.
\”How Did China Take Off?\” by Yasheng Huang
There are two prevailing perspectives on how China took off. One emphasizes the role of globalization—foreign trade and investments and special economic zones; the other emphasizes the role of internal reforms, especially rural reforms. Detailed documentary and quantitative evidence provides strong support for the second hypothesis. To understand how China\’s economy took off requires an accurate and detailed understanding of its rural development, especially rural industry spearheaded by the rise of township and village enterprises. Many China scholars believe that township and village enterprises have a distinct ownership structure—that they are owned and operated by local governments rather than by private entrepreneurs. I will show that township and village enterprises have been private from their inception and that China undertook significant and meaningful financial liberalization at the very start of reforms. Rural private entrepreneurship and financial reforms correlate strongly with some of China\’s best-known achievements—poverty reduction, fast GDP growth driven by personal consumption (rather than by corporate investments and government spending), and an initial decline of income inequality. The conventional view of China scholars is right about one point—that today\’s Chinese financial sector is completely state-controlled. This is because China reversed almost all of its financial liberalization sometime around the early to mid-1990s. This financial reversal, despite its monumental effect on the welfare of hundreds of millions of rural Chinese, is almost completely unknown in the West.

Other Articles

\”Amy Finkelstein: 2012 John Bates Clark Medalist,\” by Jonathan Levin and James Poterba

Amy Finkelstein is the 2012 recipient of the John Bates Clark Medal from the American Economic Association. The core concerns of Amy\’s research program have been insurance markets and health care. She has addressed whether asymmetric information leads to inefficiencies in insurance markets, how large social insurance programs affect healthcare markets, and the determinants of innovation incentives in health care. We describe a number of Amy\’s key research contributions, with particular emphasis on those identified by the Honors and Awards Committee of the American Economic Association in her Clark Medal citation, as well as her broader contributions to the field of economics.
\”Retrospectives: Irving Fisher\’s Appreciation and Interest (1896) and the Fisher Relation,\” by Robert W. Dimand and Rebeca Gomez Betancourt
Irving Fisher\’s monograph Appreciation and Interest (1896) proposed his famous equation showing expected inflation as the difference between nominal interest and real interest rates. In addition, he drew attention to insightful remarks and numerical examples scattered through the earlier literature, and he derived results ranging from the uncovered interest arbitrage parity condition between currencies to the expectations theory of the term structure of interest rates. As J. Bradford DeLong wrote in this journal (Winter 2000), \”The story of 20th century macroeconomics begins with Irving Fisher\” and specifically with Appreciation and Interest because \”the transformation of the quantity theory of money into a tool for making quantitative analyses and predictions of the price level, inflation, and interest rates was the creation of Irving Fisher.\” I discuss the message of Appreciation and Interest, and assess how original he was.
\”Recommendations for Further Reading,\” by Timothy Taylor

Should Voting be Compulsory?

Just to put my cards face up on the table right here at the start, I\’m not in favor of compulsory voting. But I think the case for compulsory voting is stronger than commonly recognized. Let me lay out the arguments as I see them: low voter turnout, what the penalties for not voting look like in some other countries, the free speech/constitutional issues, and whether any resulting differences in outcomes would be desirable.

The case for compulsory voting starts from the (arguable) notion that democracy would be better served if participation in elections were higher. Here\’s a figure from a post of mine a couple of months ago on \”Voter Turnout Since 1964.\” With some variation across age groups, voter turnout in presidential elections has been sagging over the last few decades.

Some nations have responded to concerns over low voter turnout by passing laws that make it a requirement to vote. Here\’s a list of countries with such laws, and the penalties that they impose for not voting, taken from a June 2006 report from Britain\’s Electoral Commission. The penalties are categorized from  \”Very Strict\” to \”None.\”   But honestly, even the \”Very Strict\” is not especially onerous.


In talking with people on this subject, I\’ve found that one immediate response is that compulsory voting must be a violation of freedom or free speech in some way. I have some of this reaction myself. But while one may reasonably oppose the idea of compulsory voting, the case that it violates a specific law or constitutional right is difficult to make. Indeed, the original 1777 constitution of the state of Georgia specifically called for a potential penalty of five pounds for not voting–although it also allowed an exception for those with a good explanation. If the U.S. government can require you to pay money for taxes, or compel you to serve on jury duty, or institute a military draft, it probably has the power to require that you show up and vote. Of course, a compulsory voting law would almost certainly include provisions for conscientious objectors to voting, and you would be permitted to turn in a totally blank ballot if you wish. The penalties for not voting would be an inconvenience, but far from draconian.

For a review of the various legal and constitutional ins and outs of compulsory voting, along with some of the practical arguments, I recommend this anonymous 2007 note in the Harvard Law Review, called \”The Case for Compulsory Voting.\”

The author points out (footnotes omitted): \”Approximately twenty-four nations have some kind of compulsory voting law, representing 17% of the world’s democratic nations. The effect of compulsory voting laws on voter turnout is substantial. Multivariate statistical analyses have shown that compulsory voting laws raise voter turnout by seven to sixteen percentage points.\”

The anonymous author also offers what seem to me ultimately the two strongest arguments for compulsory voting. The first argument is that a larger turnout will (arguably) provide a more accurate representation of what the public wants, and in that sense will strengthen the bond between the electorate and its elected representatives. The second and more subtle argument is that compulsory voting would mean that political parties could focus much less on voter turnout. Less money and effort could go into turning out the vote, and more into persuasion.  Those who now vote almost certainly have stronger partisan feelings, on average, than those who don\’t vote. So politicians aim their advertisements and strategies at that more partisan group. Many negative campaign ads attempt to reduce turnout for a candidate: if turnout was high, the usefulness of such negative ads could be diminished. A broader spectrum of voters would push candidates to offer a broader spectrum of messages to appeal to those voters, and groups that now have low turnout would find themselves equally courted by politicians.

The question becomes whether these potential benefits to the democracy as a whole are worth the imposition of compulsory voting. The anonymous writer in the Harvard Law Review offers what is surely meant to be an attention-grabbing and paradoxical-sounding conclusion: \”Although there are several legal obstacles to compulsory voting, none of them appear to be substantial enough to bar compulsory voting laws. … The biggest obstacle to compulsory voting is the political reality that compulsory voting seems incompatible with many Americans’ notions of individual liberty. As with many other civic duties, however, voting is too important to be left to personal choice.\”

How might one respond to these arguments? Perhaps the most obvious answer is that if one looks at the countries that have compulsory voting–say, Brazil, Australia, Peru, Thailand–it\’s not obvious that their politics are characterized by greater appeals to the nonpartisan middle, or that the bond between the population and its elected representatives is especially strong.

For a more detailed deconstruction, I recommend a 2009 essay by Annabelle Lever in Public Reason magazine, \”Is Compulsory Voting Justified?\” Basically, her argument comes down to a belief that the potential gains from compulsory voting are unproven and unsupported by evidence in countries that have tried it, while the lost freedom from compulsory voting would be definite and real.

In Lever\’s view, the evidence that exists doesn\’t show that political parties start competing for the middle in a different way, nor that outcomes are different. For example, northern European social democratic countries like Sweden don\’t have compulsory voting, and do have declining voter turnout.
If people are uninterested or disillusioned and don\’t want to vote for the existing candidates, it\’s not clear that threatening them with a criminal offense for not voting will build connections from the population to elected representatives. If political parties don\’t need to focus on turnout, they will immediately turn to other ways of identifying swing groups and wedge issues. The penalties for not voting may not look large in some broad sense, but let\’s be clear: when we enter the realm of compulsory voting, we are talking about making nonvoting a legal offense. Society will need to decide how large the fines or other penalties will be, and what happens to those (and there will be some!) who refuse to pay. If not voting is a crime, we will be making a lot of people into criminals–maybe guilty of only a minor crime, but still recorded in our information-technology society as breaking the law. It is by no means clear that having a right to vote should be reinterpreted as having a legal duty to vote: there are many rights that one may choose to exercise, or not, as one prefers. In a free society, the right to be left alone has some value, too. Lever concludes:

\”I have argued that the case for compulsory voting is unproven. It is unproven because the claim that compulsion will have beneficial results rests on speculation about the way that nonvoters will vote if they are forced to vote, and there is considerable, and justified, controversy on this matter. Nor is it clear that compulsory voting is well-suited to combating those forms of low and unequal turnout that are, genuinely, troubling. On the contrary, it may make them worse by distracting politicians and voters from the task of combating persistent, damaging, and pervasive forms of unfreedom and inequality in our societies.

\”Moreover, I have argued, the idea that compulsory voting violates no significant rights or liberties is mistaken and is at odds with democratic ideas about the proper distribution of power and responsibility in a society. It is also at odds with concern for the politically inexperienced and alienated, which itself motivates the case for compulsion. Rights to abstain, to withhold assent, to refrain from making a statement, or from participating, may not be very glamorous, but can be nonetheless important for that. They are necessary to protect people from paternalist and authoritarian government, and from efforts to enlist them in the service of ideals that they do not share. Rights of non-participation, no less than rights of anonymous participation, enable the weak, timid and unpopular to protest in ways that feel safe and that are consistent with their sense of duty, as well as self-interest. … People must, therefore, have rights to limit their participation in politics and, at the limit, to abstain, not simply because such rights can be crucial to prevent coercion by neighbours, family, employers or the state, but because they are necessary for people to decide what they are entitled to do, what they have a duty to do, and how best to act on their respective duties and rights.\”

I don\’t know of any recent polls on how Americans feel about compulsory voting, but a 2004 poll by ABC News found 72% opposed–a slightly higher percentage than in a poll taken 40 years earlier on the same subject. These kinds of results from nationally representative polls add a layer of irony. If Americans as a group are strongly opposed to laws that would require compulsory voting, it seems problematic to glide around this opposition into an argument that, really, although they don\’t know it yet, they would be better off with compulsory voting.

In a 2004 essay on compulsory voting (in this volume), Maria Gratschew points out that a number of countries in western Europe that used to have compulsory voting have moved away from it in recent decades: Austria, Italy, Greece, and the Netherlands. In discussing the decision by the Netherlands to drop its compulsory voting laws in 1967, Gratschew writes: \”A number of theoretical as well as practical arguments were put forward by the committee: for example, the right to vote is each citizen\’s individual right which he or she should be free to exercise or not; it is difficult to enforce sanctions against non-voters effectively; and party politics might be livelier if the parties had to attract the voters\’ attention, so that voter turnout would therefore reflect actual participation and interest in politics.\”

Compulsory voting is one of those intriguing roads that looks better when not actually traveled.

Minimum Wage to $9.50? $9.80? $10?

During the 2008 campaign, President Obama promised to raise the minimum wage to $9.50/hour by 2011. This pledge was made at a time when the economic slowdown was already underway: the recession started in December 2007. The pledge was also made at a time when an increase in the minimum wage was already underway: in May 2007, President Bush had signed into law an increase in the minimum wage, to rise in several stages from $5.15 to $7.25 in July 2009.

Last summer, some Democratic Congressmen tried to push the issue a bit. In June, 17 House Democrats signed on as co-sponsors of a bill authored by Rep. Jesse Jackson Jr. of Illinois for an immediate rise in the minimum wage to $10/hour–and then to index it to inflation in the future. In July, over 100 Democrats in the House of Representatives signed on as co-sponsors of a bill authored by Rep. George Miller of California to raise the federal minimum wage to $9.80/hour over the next three years–and then to index it to inflation after that point. But while raising the minimum wage was a hot issue in the years before Bush signed the most recent increases into law, these calls for a still-higher minimum wage got little attention.

For background, here are a couple of graphs about the U.S. minimum wage. The first graph shows the nominal minimum wage over time, and also the real minimum wage adjusted to 2011 dollars. In real terms, the increase in the minimum wage from 2007 to 2009 didn\’t quite get it back to the peak levels of the late 1960s, but did return it to the levels of the early 1960s and most of the 1970s–as well as above the levels that prevailed during much of the 1980s and 1990s. The second graph shows the minimum wage as a share of the median wage for several countries, using OECD data. The U.S. has the lowest ratio of minimum wage to median wage–and given the greater inequality of the U.S. income distribution, the U.S. ratio would look lower still if compared to average wages. However, because of the rise from 2007 to 2009, the U.S. experienced the largest rise in its minimum wage among these countries from 2006 to 2011. (Thanks to Danlu Hu for producing these graphs.)
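
The real-dollar conversion behind the first graph is simple deflation by a price index. Here is a minimal sketch of the calculation; the index values are illustrative placeholders, not official CPI figures:

```python
# Convert a nominal minimum wage into 2011 dollars by scaling with a price index.
# The index values used below are made-up for illustration, not official BLS data.
def real_wage_2011(nominal_wage, index_then, index_2011):
    """Express a past nominal wage in 2011 dollars."""
    return nominal_wage * index_2011 / index_then

# Hypothetical example: a $1.60 wage when the index stood at 34.8,
# restated with a 2011 index of 224.9.
print(round(real_wage_2011(1.60, index_then=34.8, index_2011=224.9), 2))
```

The same scaling, applied year by year, produces the real minimum wage series in the graph.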

So why didn\’t calls for a higher minimum wage in summer 2012 get more political traction? 

1) The unemployment rate in May 2007 was 4.4%, and had been below 5% for 18 months. The unemployment rate last summer was around 8.2%, and had been above 8% for more than 40 months. Thus, there was a lot less reason in May 2007 to worry about the risk that a higher minimum wage might reduce the number of jobs for unskilled labor than there was in summer 2012.

2) In summer 2012, average wage increases had not been looking good for most workers for several years, which made raising the minimum wage seem less appealing as a matter of fairness.

3) The increase in the minimum wage that President Bush signed into law that took effect from 2007 to 2009 made it feel less urgent to raise the minimum wage still further.

4) Some states have set their own minimum wages at a level above the U.S. minimum wage. The U.S. Department of Labor has a list of state minimum wage laws here: for example, California has a minimum wage of $8/hour and Illinois has a minimum wage of $8.25/hour. Thus, at least some of the jurisdictions that favor a higher minimum wage already have one.

5) In summer 2012, the Democratic establishment was focused on re-electing President Obama, and since raising the minimum wage was not part of his active agenda, it gave no publicity or support to the calls for a higher minimum wage.

6) In the academic world, there was a knock-down, drag-out scrum about the minimum wage going through much of the 1990s. David Card and Alan Krueger published a much-cited paper in 1994 in the American Economic Review, comparing minimum wage workers in New Jersey and Pennsylvania, and found that the different minimum wages across states had no effect on employment levels. (“Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania.” American Economic Review, September 1994, 84(4), pp. 772–93.) This conclusion was heavily disputed, and for those who want to get their hands dirty, the December 2000 issue of the American Economic Review had 30+ pages of critique of the Card-Krueger paper and 30+ pages of response. I won\’t seek to mediate that dispute here. But I think that the academics who were driving the arguments had sort of exhausted themselves by the time the 2007 legislation passed, and no one seemed to be slavering for a rematch.
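
The Card-Krueger approach compares the change in employment in New Jersey (which raised its minimum wage) against the change in neighboring Pennsylvania (which did not). A stylized difference-in-differences calculation of that kind, with made-up employment numbers for illustration:

```python
# Difference-in-differences: compare the change in the treated state (NJ)
# to the change in the control state (PA). All figures below are invented.
nj_before, nj_after = 20.4, 21.0   # average employment per store, New Jersey
pa_before, pa_after = 23.3, 21.2   # average employment per store, Pennsylvania

did_estimate = (nj_after - nj_before) - (pa_after - pa_before)
print(f"DiD estimate: {did_estimate:+.1f} workers per store")
```

The control state's change stands in for the counterfactual: what would have happened to employment in the treated state had the minimum wage not been raised.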

I was lukewarm on the rise in the minimum wage that was enacted in 2007. It seems to me that there are better ways to help low-wage workers. But that said, if the minimum wage isn\’t very far above the market wage for unskilled labor (and in some places may even be below the market wage for unskilled labor), there\’s no reason to believe that it will have large effects on employment. However, raising the minimum wage further to the range of $9.50/hour or $10/hour would in many parts of the country push it well above the prevailing wage for unskilled labor, especially in a still-weak economy, and so the effects on employment would be more deleterious.

I tried to explain some of the other policy issues raised by a higher minimum wage in my book The Instant Economist: Everything You Need to Know About How the Economy Works, published earlier this year by Penguin Plume.

\”Here’s an insight for opponents of a higher minimum wage to mull over: Let’s say a 20 percent rise in the minimum wage leads to 4 percent fewer jobs for low-skilled workers (as some of the evidence suggests). But this also implies that a higher minimum wage leads to a pay raise for 96 percent of low-skilled workers. Many people in low-skill jobs don’t have full-time, year-round jobs. So perhaps these workers work 4 percent fewer hours in a year, but they get 20 percent higher pay for the hours they do work. In this scenario, even if the minimum wage reduces the number of jobs or the number of hours available, raising it could still make the vast majority of low-skilled workers better off, as they’d work fewer hours at a higher wage.
\”There’s another side to the argument, however. The short-term costs to an individual of not being able to find a job are quite large, while the benefits of slightly higher wages are (relatively speaking) somewhat smaller, so the costs to the few who can’t find jobs because of a higher minimum wage may be in some sense more severe than the smaller benefits to individuals who are paid more. Those costs of higher unemployment are also unlikely to be spread evenly across the economy; instead, they are likely to be concentrated in communities that are already economically disadvantaged. Also, low-skill jobs are often entry-level jobs. If low-skill jobs become less available, the bottom rung on the employment ladder becomes less available to low-skilled workers. Thus, higher minimum wages might offer modest gains to the substantial number of low-skilled workers who get jobs, but impose substantial economic injury on those who can’t.
\”There are alternatives to price floors, and economists often tend to favor such alternatives because they work with the forces of supply and demand. For example, if a government wants to boost wages for low-skilled workers, it could invest in skills-training programs. This would enable some of those workers to move into more skills-driven (and better paying) positions and would lower the supply of low-skilled labor, driving up their wages as well. The government could subsidize firms that hire low-skilled workers, enabling the firms to pay them a higher wage. Or it could subsidize the wages of low-skilled workers directly through programs such as the Earned Income Tax Credit, which provides a tax break to workers whose income is below a certain threshold. This policy increases the workers’ net income without placing any financial burden on the employers.\”
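
The arithmetic in the first paragraph of the excerpt can be checked in a couple of lines. The numbers here are the passage's own hypothetical (a 20 percent wage increase, 4 percent fewer hours), not an empirical estimate:

```python
# Net effect on earnings for a worker who keeps a job but loses some hours.
wage_increase = 0.20     # 20% higher hourly wage (hypothetical, from the passage)
hours_reduction = 0.04   # 4% fewer hours worked (hypothetical, from the passage)

earnings_multiplier = (1 + wage_increase) * (1 - hours_reduction)
print(f"earnings change: {earnings_multiplier - 1:+.1%}")
```

So under these assumptions, hours fall but total pay still rises by roughly 15 percent for workers who remain employed, which is the point the excerpt is making.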

What I didn\’t point out in the book is the political dynamic: raising the minimum wage allows politicians to pretend that they are helping people at zero cost, because the costs don\’t appear as taxes and spending. But pushing up the minimum wage substantially now, after the recent increases and in a still-struggling economy, does not strike me as wise policy.

Addendum: Thanks to reader L.S. who let me know that my argument here–a minimum wage law can play a useful redistribution function under certain labor market assumptions, but in general it is better for the government to move to a lower minimum wage and higher government support for low-wage workers–is quite similar to the more formal case made by David Lee and Emmanuel Saez in their recent Journal of Public Economics article, \”Optimal minimum wage policy in competitive labor markets.\”

Economics and Natural Disasters

In the aftermath of Hurricane Sandy, many teachers and students of economics will find themselves searching for materials that provide some background on the economics of natural disasters. Here are a few examples from the last few years.

David Stromberg laid out the economic arguments about natural disasters in \”Natural Disasters, Economic Development, and Humanitarian Aid,\” appearing in the Summer 2007 issue of my own Journal of Economic Perspectives. (This article, like all JEP articles back to the start of the journal in 1987, is freely available to all courtesy of the American Economic Association.) Stromberg makes the fundamental point that the economic analysis of natural disasters is built on three factors: the incidence of the natural disasters themselves, the number of people exposed to the disaster, and the vulnerability of the population to that disaster. In fact, Stromberg traces this distinction back to letters between Voltaire and Rousseau in the aftermath of the great Lisbon earthquake of 1755. Voltaire had written a poem on how terrible the earthquake was; Rousseau had responded by pointing out that it was not the quake, but the interaction between human society and the quake, which was at issue. Here\’s Stromberg (footnotes and citations omitted):

\”[I]n 1755 an earthquake devastated Lisbon, which was then Europe’s fourth-largest city. At the first quake, fissures five meters wide appeared in the city center. The waves of the subsequent tsunami engulfed the harbor and downtown. Fires raged for days in areas unaffected by the tsunami. An estimated 60,000 people were killed, out of a Lisbon population of 275,000. In a letter to Voltaire dated August 18, 1756, Jean-Jacques Rousseau notes that while the earthquake was an act of nature, previous acts of men, like housing construction and urban residence patterns, set the stage for the high death toll. Rousseau wrote: “Without departing from your subject of Lisbon, admit, for example, that nature did not construct twenty thousand houses of six to seven stories there, and that if the inhabitants of this great city had been more equally spread out and more lightly lodged, the damage would have been much less and perhaps of no account.”

\”Following Rousseau’s line of thought, disaster risk analysts distinguish three factors contributing to a disaster: the triggering natural hazard event (such as the earthquake striking in the Atlantic Ocean outside Portugal); the population exposed to the event (such as the 275,000 citizens of Lisbon); and the vulnerability of that population (higher for the people in seven-story buildings).\”

Of course, this insight implies that what events are classified as a \”natural disaster\” is not just about the size of the natural event, but about how many people are affected. Thus, the WHO Collaborating Centre for Research on the Epidemiology of Disasters (CRED) maintains an Emergency Events Database that collects data on natural disasters, where a disaster is defined as 10 or more people reported killed, 100 or more people reported affected, a declaration of a state of emergency, or a call for international assistance. Here are some of their figures showing global trends in natural disasters from 1975 through 2011. The first graph shows the number of such disasters over time: the total was rising into the early 2000s, but has leveled off since then.
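
The database's entry criteria are a simple either/or test, which can be written as a predicate. The field names below are illustrative, not the database's actual schema:

```python
# An event enters the database if ANY one of the four criteria is met.
def counts_as_disaster(killed, affected, state_of_emergency, intl_assistance):
    return (killed >= 10
            or affected >= 100
            or state_of_emergency
            or intl_assistance)

# A small event killing 3 and affecting 40 people, with no declarations,
# would not be recorded as a disaster:
print(counts_as_disaster(3, 40, False, False))
```

Note that the thresholds are about people, not about the physical magnitude of the event, which is exactly the Rousseau point above.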

The second and third graphs show the number of people killed and the number of people affected by such disasters. The trendline for the number of people killed has been dropping over time, with occasional spikes: in 2010, the earthquake in Haiti, or in 2008, the cyclone that hit Myanmar and the major earthquake in China. However, the number of people affected by natural disasters is rising over time, which one would expect as a result of growing population levels, if nothing else.

Finally, the fourth graph shows monetary losses from natural disasters. Of course, this graph is driven by whether the disasters hit high-income or middle-income countries, where the measured economic costs of damage are higher than in low-income countries.

The best way of dealing with natural disasters is often before they occur: early warning systems, advance planning, encouraging natural protections like minimizing deforestation or protecting wetlands, building codes, flood control, and more. For a nice overview of such efforts around the world, I recommend the 2010 report on \”Natural Hazards, UnNatural Disasters: The Economics of Effective Prevention,\” from the World Bank. The report begins: \”The adjective “UnNatural” in the title of this report conveys its key message: earthquakes, droughts, floods, and storms are natural hazards, but the unnatural disasters are deaths and damages that result from human acts of omission and commission. Every disaster is unique, but each exposes actions—by individuals and governments at different levels—that, had they been different, would have resulted in fewer deaths and less damage. Prevention is possible, and this report examines what it takes to do this cost-effectively.\”

The World Bank report is focused more on low-income countries, but similar lessons about prevention apply to high-income countries, as well. In the New York Times on Tuesday, David W. Chen and Mireya Navarro discuss \”For Years, Warnings That It Could Happen Here.\” They talk about proposals that have been floating around the New York metro area for years now about levee systems, storm surge barriers, floodgates in subways, moving people and economic activity away from low-lying areas, and in general having plans in place. Not many storms will pack the wallop of Hurricane Sandy, but New York City is a huge agglomeration of people living on a coastline who will inevitably be susceptible to storm and flood damage.

Two other quick references: First, the National Flood Insurance Program will almost certainly not have the money to pay for the damage from Hurricane Sandy. For background on that program and how it works, and why its inability to fund these damages was completely predictable, Erwann O. Michel-Kerjan lays it out in \”Catastrophe Economics: The National Flood Insurance Program,\” in the Fall 2010 issue of my own Journal of Economic Perspectives.

Second, the aftermath of natural disasters is often a motivation for teachers of economics to discuss the extent to which, even if price restrictions don\’t make economic sense most of the time, they might be justifiable in the aftermath of a natural disaster to prevent \”price-gouging.\” Michael Giberson of Texas Tech University wrote a nice readable essay on \”The Problem with Price Gouging Laws\” in the Spring 2011 issue of Regulation magazine. I blogged about the article here. He points out that 31 states have such laws, and that the completely predictable problems with such laws are that they discourage bringing supplies into disaster areas, discourage conserving key resources, concentrate economic losses on local merchants, and worsen the economic losses in the disaster area.

72 is the New 30?!!

Everyone knows that human life expectancies have been improving. But just how extraordinary and incomparable that improvement has been is not widely understood. Demographers Oskar Burger, Annette Baudisch, and James W. Vaupel offer two remarkable sets of comparisons in \”Human mortality improvement in evolutionary context,\” which appears in a recent issue of the Proceedings of the National Academy of Sciences (October 30, 2012, vol. 109, no. 44, 18210-18214). From their abstract:

\”The health and economic implications of mortality reduction have been given substantial attention, but the observed malleability of human mortality has not been placed in a broad evolutionary context. We quantify the rate and amount of mortality reduction by comparing a variety of human populations to the evolved human mortality profile, here estimated as the average mortality pattern for ethnographically observed hunter-gatherers. We show that human mortality has decreased so substantially that the difference between hunter-gatherers and today’s lowest mortality populations is greater than the difference between hunter-gatherers and wild chimpanzees. The bulk of this mortality reduction has occurred since 1900 and has been experienced by only about 4 of the roughly 8,000 human generations that have ever lived. Moreover, mortality improvement in humans is on par with or greater than the reductions in mortality in other species achieved by laboratory selection experiments and endocrine pathway mutations.\”

Their first main set of comparisons is to look at human mortality declines in a very long-run evolutionary context: from hunter-gatherers to modern humans. They focus to some extent on modern Sweden and Japan as examples of the highest life expectancies for modern humans (in what follows \”y\” is the writers\’ abbreviation for \”years\”; footnotes and references to figures are omitted).

\”That is, Swedes in 1900 had mortality profiles closer to hunter-gatherers than to the Swedes of today. This relative difference between Swedes recently and those 100 y ago has emerged in a rapid revolutionary leap, as this distance is far greater than that between hunter-gatherers and chimps. The recent jumps in mortality reduction are remarkable in the context of mammal diversity because age-specific death rates for hunter-gatherers are already exceptionally low, probably among the lowest of any nonhuman primate or terrestrial mammal (especially if body size is controlled for), and lower than even captive chimpanzees at all ages. The human mortality profile, however, is so plastic that over the past century the populations doing best managed to achieve very large reductions in death rates that were already low compared with those of other species. …

\”For example, hunter-gatherers at age 30 have the same probability of death as present-day Japanese at the age of 72: hence the age of a person in Japan that is equivalent to a 30-y-old hunter-gatherer is 72. In other words, compared with the evolutionary pattern, 72 is the new 30. …

\”In gross comparative terms, this means that during evolution from a chimp-like ancestor to anatomically modern humans, mortality levels once typical of prime-of-life individuals were pushed back to later ages at the rate of a decade every 1.3 million years, but the mortality levels typical of a 15-y-old in 1900 became typical of individuals a decade older about every 30 y since 1900.\”

Their second main set of comparisons is to look at the reduction in human mortality compared with the reductions achieved in laboratory settings by manipulating the genetics and the environment of fruit flies, nematode worms, mice, and the like.

\”Fruit fly selection experiments achieve significant extensions in life span by rearing successive generations from eggs laid by old individuals. In one classic example, mean life span increased by about 30% in 15 generations, for a rate of change of almost 2% per generation, and in another by about 100% in 13 generations, or just over 5% per generation. For human hunter-gatherers, mean life span at birth is about 31 … For Swedes, it was about 32 in 1800, 52 in 1900, and is 82 today. So life expectancy increased by about 165% from hunter-gatherers to modern Swedes and at a rate of about 12% per generation since 1800.
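The per-generation rates quoted above are just compound growth rates, and can be checked with a few lines of arithmetic. This is my own back-of-envelope sketch, not the authors\’ calculation; the assumption of roughly 25-year human generations (about 8 of them since 1800) is mine:

```python
# Per-generation growth rate implied by a total life-span increase over
# n generations, treating the change as compound growth.
def per_generation_rate(total_increase, generations):
    return (1 + total_increase) ** (1 / generations) - 1

# Fruit fly selection experiments quoted above:
print(f"{per_generation_rate(0.30, 15):.1%}")  # "almost 2% per generation"
print(f"{per_generation_rate(1.00, 13):.1%}")  # "just over 5% per generation"

# Swedish life expectancy: ~32 in 1800 to ~82 today; assuming ~25-year
# generations, that is roughly 8 generations since 1800 (my assumption).
print(f"{per_generation_rate(82 / 32 - 1, 8):.1%}")  # about 12% per generation
```

The results line up with the paper\’s wording: roughly 1.8% per generation for the first fruit fly experiment, about 5.5% for the second, and on the order of 12% per generation for Swedes since 1800.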

\”Some of the most promising directions for understanding the physiological mechanisms of aging come from experiments with mutations that affect the endocrine pathway. These impressive experiments have extended mean life span in nematode worms by >100%, fruit flies by ∼85%, and laboratory mice by ∼50%. Dietary restriction, which involves suppressing caloric intake of an organism, has extended life span in nematodes by 100–200%, fruit flies by ∼100%, and mice by ∼50%. Hence recent human mortality improvement is often greater than that achieved by manipulated strains of model organisms relative to the wild type, especially when single mutations or physiological pathways are manipulated. However, experiments that simultaneously manipulate multiple pathways in organisms such as yeast and nematode worms can achieve much greater life span extensions. The majority of laboratory studies where mammals are the model organism have been done on mice and yield percentage life span increases less than those gained by humans.\”

It\’s unclear just what the recent changes in human life expectancy mean for the long run, because they are so without parallel either in the evolutionary record or in the lab. It seems unlikely that the huge gains in human life expectancy since 1900 or so can be attributed to large changes in genetics or physiological processes: not enough generations have passed. As the authors ask: \”Why does the human genome give humans a license to drastically reduce mortality by nongenetic change?\” The answer is not yet clear, but what is clear is that there is a \”biologically unique\” plasticity in the human mortality decline that has already occurred.

Driverless Cars

Automobile travel transformed how people relate to distance: it decentralized how people live and work, and gave them a new array of choices for everything from the Friday night date to the long-distance road trip. I occasionally marvel that we can take our family of five, with all our gear, door-to-door for a getaway to a YMCA family camp 250 miles away in northern Minnesota–all for the marginal cost of less than a tank of gas. Driverless cars may turn out to be one of those rare inventions that transform transportation even further. KPMG and the nonprofit Center for Automotive Research published a report last August on \”Self-driving cars: The next revolution.\” It\’s available from the KPMG website here, and from the CAR website here. I missed the report when it first came out, but then saw this story about it in a recent issue of the Economist magazine.

Many people have heard about the self-driving cars run by Google that have already driven over 200,000 miles on public roads. The report makes clear that automakers are taking this technology very seriously as well, and developing the range of sensor-based and connected-vehicle technologies that would be needed to make this work. Examples of the technology include the Light Detection and Ranging (LIDAR) equipment that does 360-degree sensing around a car. The LIDAR systems that Google retrofitted into cars cost about $70,000 per car. Dedicated Short-Range Communication (DSRC) has certain standards and a designated frequency for short-range communication, and can thus be focused on vehicle-to-vehicle and vehicle-to-infrastructure communication. Ultimately, these would be tied together so that self-driving cars could travel closely together in \”platoons\” and minimize traffic congestion. And it\’s not just Google and the car companies. Intel Capital, for example, recently launched a $100 million Connected Car Fund.

Of course, it is always possible that driverless cars will run up against insurmountable barriers. But skeptics should remember that the original idea of the automobile looked pretty dicey as well. Millions of adults will be personally driving motorized vehicles at potentially high speeds? They will drive on a network of publicly provided roads that will reach all over the country? They will fill these vehicles with flammable fuel that will be dispensed by tens of thousands of small stores all over the country? If the social gains seem large enough, technologies often have a way of emerging. With driverless cars, what are some of the gains? Quotations are from the report: as usual, footnotes are omitted for readability.

Costs of Traffic Accidents
Self-driving cars have the potential to save tens of thousands of lives, and prevent hundreds of thousands of injuries, every year.  \”In 2010, there were approximately six million vehicle crashes leading to 32,788 traffic deaths, or approximately 15 deaths per 100,000 people. Vehicle crashes are the leading cause of death for Americans aged 4–34. And of the 6 million crashes, 93 percent are attributable to human error. … More than 2.3 million adult drivers and passengers were treated in U.S. emergency rooms in 2009. According to research from the American Automobile Association (AAA), traffic crashes cost Americans $299.5 billion annually.\” Moreover, an enormous reduction in crash risk would allow a redesign of cars to be much lighter.

Costs of Infrastructure

Driverless cars will allow many more cars to use a highway simultaneously. \”An essential implication for an autonomous vehicle infrastructure is that, because efficiency will improve so dramatically, traffic capacity will increase exponentially without building additional lanes or roadways. Research indicates that platooning of vehicles could increase highway lane capacity by up to 500 percent. It may even be possible to convert existing vehicle infrastructure to bicycle or pedestrian uses. Autonomous transportation infrastructure could bring an end to the congested streets and extra-wide highways of large urban areas.\”

They could reduce the cost of design of highways. \”[T]oday’s roadways and supporting infrastructure must accommodate for the imprecise and often-unpredictable movement patterns of human-driven vehicles with extra-wide lanes, guardrails, stop signs, wide shoulders, rumble strips and other features not required for self-driving, crashless vehicles. Without those accommodations, the United States could significantly reduce the more than $75 billion it spends annually on roads, highways, bridges, and other infrastructure.\”

Driverless cars will alter the need for parking.  Imagine that your car will drop you at your office door, head off to park itself, and come back when you call it.  \”In his book ReThinking a Lot (2012), Eran Ben-Joseph notes, “In some U.S. cities, parking lots cover more than a third of the land area, becoming the single most salient landscape feature of our built environment.”\”

Costs of Time
Driverless cars might be faster, but in addition, they open up the possibility of using travel time for work or relaxation. Your car could become a rolling office, or a place for watching movies, or a place for a nap. \”An automated transportation system could not only eliminate most urban congestion, but it would also allow travelers to make productive use of travel time. In 2010, an estimated 86.3 percent of all workers 16 years of age and older commuted to work in a car, truck, or van, and 88.8 percent of those drove alone … The average commute time in the United States is about 25 minutes.
Thus, on average, approximately 80 percent of the U.S. workforce loses 50 minutes of potential productivity every workday.  With convergence, all or part of this time is recoverable. Self-driving vehicles may be customized to serve the needs of the traveler, for example as mobile offices, sleep pods, or entertainment centers.\” I find myself imagining the overnight road-trip, where instead of driving all day, you sleep in the car and awake at your destination. 
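The report\’s \”approximately 80 percent\” and \”50 minutes\” figures appear to come from multiplying the two commuting shares and doubling the one-way commute time. Here is my own back-of-envelope check; combining the shares this way is my arithmetic, not spelled out in the report:

```python
# Rough check of the report's commuting arithmetic (my own back-of-envelope).
share_by_car = 0.863      # workers commuting by car, truck, or van (2010)
share_alone = 0.888       # of those, the share driving alone
avg_commute_min = 25      # average one-way commute, in minutes

solo_share = share_by_car * share_alone
daily_minutes = 2 * avg_commute_min   # round trip

print(f"share driving alone: {solo_share:.1%}")   # ~76.6%
print(f"minutes per workday: {daily_minutes}")    # 50 minutes of potential productivity
```

The product of the two shares is closer to 77 percent, which the report evidently rounds up to \”approximately 80 percent.\”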
 

Costs of Energy
The combination of much lighter cars, being driven much more efficiently, could dramatically reduce energy use. Lighter cars use less fuel: \”Vehicles could also be significantly lighter and more energy efficient than their human-operated counterparts as they no longer need all the heavy safety features, such as reinforced steel bodies, crumple zones, and airbags. (A 20 percent reduction in weight corresponds to a 20 percent increase in efficiency.)\” \”Platooning alone, which would reduce the effective drag coefficient on following vehicles, could reduce highway fuel use by up to 20 percent…\” \”According to a report published by the MIT Media Lab, “In congested urban areas, about 40 percent of total gasoline use is in cars looking for parking.\”

 Costs of Car Ownership
Most cars are unused for 22 hours out of every day. I already know people in cities like New York who own a car, but keep it in storage for out-of-city trips. I know people who use companies like ZipCar, a membership-based service that lets you have a car for a few hours when you need it. Driverless cars may offer a replacement for car ownership. Need a car? A few taps on your smart-phone and one will come to meet you, and take you where you want to be. The price will of course be lower if you don\’t mind being picked up in an automated carpool. 

Mobility for the Young and the Old
Imagine being an elderly person who has become uncomfortable with driving, at least at certain times or under certain conditions. Driverless cars would offer continued mobility. Imagine being able to put your teenager in a car and have them safely delivered to their destination. Imagine always having a safe ride home after a night on the town.

How Fast?
The fully self-driving car isn\’t right around the corner. Clearly, costs need to come down substantially and a number of complementary technologies need to be created. However, we do already have cars in the commercial market with cruise control and anti-lock brakes, as well as cars that sense potential crash hazards and can parallel park themselves. Changes like these happen slowly, and then in a rush. As the report notes, \”The adoption of most new technologies proceeds along an S-curve, and we believe the path to self-driving vehicles will follow a similar trajectory.\” Maybe 10-15 years? Faster?

How Retirement Age Tracks Social Security\’s Rules

Back in 1983, one of the steps taken to bolster the long-run finances of the Social Security system was to phase in a rise in the \”normal\” or \”full\” retirement age. The normal retirement age for receiving full Social Security benefits had been 65, with \”early retirement\” with lower benefits possible at age 62. Under the new rules, the normal retirement age remained 65 for those born in 1937 or earlier–and thus turning 65 before 2002. It then phased up by 2 months per year, so that for those born six years later in 1943 or after, the normal retirement age is now 66. Written into law is a follow-up increase in the normal retirement age from 66 to 67, to be phased in, again at a rate of two months per year, for those born from 1955 to 1960.
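The phase-in just described can be encoded as a small lookup. This is my own sketch of the schedule as summarized above, worth checking against the Social Security Administration\’s published tables before relying on it:

```python
def normal_retirement_age(birth_year):
    """Social Security normal retirement age under the 1983 phase-in,
    returned as (years, months). My encoding of the schedule above."""
    if birth_year <= 1937:
        extra_months = 0                                # NRA of 65
    elif birth_year <= 1942:
        extra_months = 2 * (birth_year - 1937)          # +2 months per birth year
    elif birth_year <= 1954:
        extra_months = 12                               # NRA of 66
    elif birth_year <= 1959:
        extra_months = 12 + 2 * (birth_year - 1954)     # second phase-in
    else:
        extra_months = 24                               # NRA of 67, 1960 and later
    return divmod(65 * 12 + extra_months, 12)

print(normal_retirement_age(1937))  # (65, 0)
print(normal_retirement_age(1943))  # (66, 0)
print(normal_retirement_age(1960))  # (67, 0)
```

For example, someone born in 1940 gets a normal retirement age of 65 years and 6 months, and anyone born in 1960 or later gets 67.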

How has this change altered actual retirement patterns? What are the reasons, either for retirees or for the finances of Social Security, to encourage still-later retirement?

Economists have long recognized that what a government designates as the \”normal\” retirement age has a big effect on when people actually choose to retire. Luc Behaghel and David M. Blau present some of the recent evidence in \”Framing Social Security Reform: Behavioral Responses to Changes in the Full Retirement Age,\” which appears in the November 2012 American Economic Journal: Economic Policy (4(4): 41–67). (The journal isn\’t freely available on-line, but many in academia will have access through a library subscription.)

Consider the following graphs from Behaghel and Blau. Each one is for those born in a different year, from 1937 up through 1942, as the normal retirement age phased up. These people are the ones hitting the normal retirement age of 65 in the early and mid-2000s. The solid line shows the probability of retirement at each age. Vertical red lines mark the early retirement age of 62, the previous normal retirement age of 65, and the actual normal retirement age for that birth cohort as it phases up two months per year. The dashed line, which is the same in all the figures, shows for comparison the retirement pattern for those born over the 1931-1936 period.

The main striking pattern is that the probability of retiring at a certain age almost exactly tracks the changes in the normal retirement age: that is, the solid line spikes at the red vertical line showing the normal retirement age. There is also a spike at the early retirement age of 62. Here are the patterns.

The evidence here seems clear: people are making their retirement choices in sync with the government-set normal retirement age. This pattern isn\’t new: as the authors point out, a spike in retirement at age 65 became visible in the data back in the early 1940s, about five years after Social Security became law. Still, the obvious question (for an economist) is why people would make this choice. If you retire later than the normal retirement age, your monthly benefits are scaled up, so from the viewpoint of overall expected lifetime payments, you don\’t gain from retiring earlier. A number of possible explanations have been proposed: 1) people don\’t have other sources of income and need to take the retirement benefits as soon as possible for current income; 2) people are myopic, or don\’t recognize that their monthly benefits would be higher if they delayed retirement; 3) many people are waiting until age 65 to retire so that they can move from their employer health insurance to Medicare; 4) some company retirement plans encourage retiring at age 65.

However, none of these explanations gives an obvious reason for why the retirement age would exactly track the changes in the Social Security normal retirement age. A final \”behavioral\” explanation, then, is that the \”normal\” retirement age announced by the government, whatever it is, is treated by many people as a recommendation to be followed. Choosing a retirement date in this way is probably suboptimal both for individuals and for the finances of the Social Security system.

From the standpoint of individuals, there\’s a widespread sense among economists that many retirees would benefit from having more of their wealth in annuities–that is, an amount that would pay out no matter how long they live. In the Fall 2011 issue of my own Journal of Economic Perspectives, 
Shlomo Benartzi, Alessandro Previtero, and Richard H. Thaler have an article on \”Annuitization Puzzles,\” which makes the point that when you delay receiving Social Security, you are in effect buying an annuity: that is, you are taking less in the present–which is similar to \”paying\” for the annuity– in exchange for a larger long-term payment in the future. They write: \”[T]he easiest way to increase the amount of annuity income that families have is to delay the age at which people start claiming Social Security benefits. Participants are first eligible to start claiming benefits at age 62, but by waiting to begin, the monthly payments increase in an actuarially fair manner until age 70. \”

They further argue that a good starting point for encouraging such behavior would be to re-frame the way in which the Social Security Administration, and all the rest of us, talk about Social Security benefits. Imagine that, with no change at all in the current law, we all started talking about a \”standard retirement age\” of 70. We would point out that you can retire earlier, but if you do, monthly benefits will be lower. If the choice of when to retire were framed in this way, my strong suspicion is that many people would delay retirement, compared with what happens when we announce that the \”normal retirement age\” is 66 and that monthly benefits will be higher if you wait. Again, people seem to react to what the government designates as the target for retirement age.

This labeling change might encourage people to work longer, but it would not affect the solvency of the Social Security system, because those who wait longer to retire are, in effect, paying for their own higher monthly benefits by delaying the receipt of those benefits. However, the Social Security actuaries offer a number of illustrative calculations on their website about possible steps to bolster the financing of the system. One proposal about phasing back the normal age of retirement looks like this: \”After the normal retirement age (NRA) reaches 67 for those age 62 in 2022, increase the NRA 2 months per year until it reaches 69 for individuals attaining age 62 in 2034. Thereafter, increase the NRA 1 month every 2 years.\”
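The actuaries\’ proposed schedule can be sketched in the same spirit. This encoding is my own reading of the quoted proposal (someone who is 62 in 2022 was born in 1960, and 62 in 2034 means born in 1972), so treat the birth-year boundaries as assumptions:

```python
def proposed_nra_months(birth_year):
    """NRA in months under the actuaries' illustrative proposal quoted above.
    My own encoding of their schedule, keyed to birth year."""
    base = 67 * 12
    if birth_year <= 1960:
        return base                              # NRA of 67, as under current law
    if birth_year <= 1972:
        return base + 2 * (birth_year - 1960)    # +2 months per year, up to 69
    return base + 24 + (birth_year - 1972) // 2  # then +1 month every 2 years

assert proposed_nra_months(1972) == 69 * 12      # NRA reaches 69 for 62-in-2034
print(proposed_nra_months(1980) // 12, proposed_nra_months(1980) % 12)  # 69 4
```

Under this reading, someone born in 1980 would face a normal retirement age of 69 years and 4 months.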

Thus, this proposal would represent no change in the rules for Social Security benefits for anyone born before 1960–and thus in their early 50s at present. Under this proposal, those born after 1960 would face the gradual phase-in–but of course, they would also benefit from having a program that is much closer to fully funded. The actuaries estimate that this step by itself would address about 44% of the gap over the next 75 years between what Social Security has promised and the funding that is expected during that time. Given the predicted shortfalls of the Social Security system in the future, the gains in life expectancy both in the last few decades and expected in the next few decades, and the parlous condition of large budget deficits reaching into the future, I would be open to proposals to phase in a more rapid and more sustained rise in the normal retirement age for Social Security benefits.