(1) Front Matter
(2) Neuroeconomic Foundations of Economic Choice–Recent Advances
|Ernst Fehr and Antonio Rangel|
Neuroeconomics combines methods and theories from neuroscience, psychology, economics, and computer science in an effort to produce detailed computational and neurobiological accounts of the decision-making process that can serve as a common foundation for understanding human behavior across the natural and social sciences. Because neuroeconomics is a young discipline, a sufficiently sound structural model of how the brain makes choices is not yet available. However, the contours of such a computational model are beginning to emerge; and, given the rapid progress, there is reason to be hopeful that the field will eventually put together a satisfactory structural model. This paper has two goals: First, we provide an overview of what has been learned about how the brain makes choices in two types of situations: simple choices among small numbers of familiar stimuli (like choosing between an apple and an orange), and more complex choices involving tradeoffs between immediate and future consequences (like eating a healthy apple or a less-healthy chocolate cake). Second, we show that, even at this early stage, insights with important implications for economics have already been gained.
(3) It’s about Space, It’s about Time, Neuroeconomics and the Brain Sublime
|Marieke van Rooij and Guy Van Orden|
Neuroeconomics has investigated which regions of the brain are associated with the factors contributing to economic decision making, emphasizing the position in space of brain areas associated with the factors of decision making—cognitive or emotive, rational or irrational. An alternative view of the brain has given priority to time over space, investigating the temporal patterns of brain dynamics to determine the nature of the brain’s intrinsic dynamics, how its various activities change over time. These two ways of approaching the brain are contrasted in this essay to gauge the contemporary status of neuroeconomics.
(4) Molecular Genetics and Economics
|Jonathan P. Beauchamp|
The costs of comprehensively genotyping human subjects have fallen to the point where major funding bodies, even in the social sciences, are beginning to incorporate genetic and biological markers into major social surveys. How, if at all, should economists use and combine molecular genetic and economic data from these surveys? What challenges arise when analyzing genetically informative data? To illustrate, we present results from a “genome-wide association study” of educational attainment. We use a sample of 7,500 individuals from the Framingham Heart Study; our dataset contains over 360,000 genetic markers per person. We get some initially promising results linking genetic markers to educational attainment, but these fail to replicate in a second large sample of 9,500 people from the Rotterdam Study. Unfortunately such failure is typical in molecular genetic studies of this type, so the example is also cautionary. We discuss a number of methodological challenges that face researchers who use molecular genetics to reliably identify genetic associates of economic traits. Our overall assessment is cautiously optimistic: this new data source has potential in economics. But researchers and consumers of the genoeconomic literature should be wary of the pitfalls, most notably the difficulty of doing reliable inference when faced with multiple hypothesis problems on a scale never before encountered in social science.
(5) Genes, Eyeglasses, and Social Policy
|Charles F. Manski|
|Someone reading empirical research relating human genetics to personal outcomes must be careful to distinguish two types of work: An old literature on heritability attempts to decompose cross-sectional variation in observed outcomes into unobservable genetic and environmental components. A new literature measures specific genes and uses them as observed covariates when predicting outcomes. I will discuss these two types of work in terms of how they may inform social policy. I will argue that research on heritability is fundamentally uninformative for policy analysis, but make a cautious argument that research using genes as covariates is potentially informative.|
(6) The Composition and Drawdown of Wealth in Retirement
|James Poterba, Steven Venti and David Wise|
This paper presents evidence on the resources available to households as they enter retirement. It draws heavily on data collected by the Health and Retirement Study. We calculate the “potential additional annuity income” that households could purchase, given their holdings of non-annuitized financial assets at the start of retirement. We also consider the role of housing equity in the portfolios of retirement-age households and explore the extent to which households draw down housing equity and financial assets as they age. Because home equity is often conserved until very late in life, for many households it may provide some insurance against the risk of living longer than expected. Finally, we consider how our findings bear on a number of policy issues, such as the role for annuity defaults in retirement saving plans.
(7) Insuring Long-Term Care in the United States
|Jeffrey R. Brown and Amy Finkelstein|
|Long-term care expenditures constitute one of the largest uninsured financial risks facing the elderly in the United States and thus play a central role in determining the retirement security of elderly Americans. In this essay, we begin by providing some background on the nature and extent of long-term care expenditures and insurance against those expenditures, emphasizing in particular the large and variable nature of the expenditures and the extreme paucity of private insurance coverage. We then provide some detail on the nature of the private long-term care insurance market and the available evidence on the reasons for its small size, including private market imperfections and factors that limit the demand for such insurance. We highlight how the availability of public long-term care insurance through Medicaid is an important factor suppressing the market for private long-term care insurance. In the final section, we describe and discuss recent long-term care insurance public policy initiatives at both the state and federal level.|
(8) Annuitization Puzzles
|Shlomo Benartzi, Alessandro Previtero and Richard H. Thaler|
In his Nobel Prize acceptance speech given in 1985, Franco Modigliani drew attention to the “annuitization puzzle”: that annuity contracts, other than pensions through group insurance, are extremely rare. Rational choice theory predicts that households will find annuities attractive at the onset of retirement because they address the risk of outliving one’s income, but in fact, relatively few of those facing retirement choose to annuitize a substantial portion of their wealth. There is now a substantial literature on the behavioral economics of retirement saving, which has stressed that both behavioral and institutional factors play an important role in determining a household’s saving accumulations. Self-control problems, inertia, and a lack of financial sophistication inhibit some households from providing an adequate retirement nest egg. However, interventions such as automatic enrollment and automatic escalation of saving over time as wages rise (the “save more tomorrow” plan) have shown success in overcoming these obstacles. We will show that the same behavioral and institutional factors that help explain savings behavior are also important in understanding 1) how families handle the process of decumulation once retirement commences and 2) why there seems to be so little demand to annuitize wealth at retirement.
(9) The Case for a Progressive Tax: From Basic Research to Policy Recommendations
|Peter Diamond and Emmanuel Saez|
This paper presents the case for tax progressivity based on recent results in optimal tax theory. We consider the optimal progressivity of earnings taxation and whether capital income should be taxed. We critically discuss the academic research on these topics and when and how the results can be used for policy recommendations. We argue that a result from basic research is relevant for policy only if 1) it is based on economic mechanisms that are empirically relevant and first order to the problem, 2) it is reasonably robust to changes in the modeling assumptions, and 3) the policy prescription is implementable (i.e., is socially acceptable and not too complex). We obtain three policy recommendations from basic research that satisfy these criteria reasonably well. First, very high earners should be subject to high and rising marginal tax rates on earnings. Second, low-income families should be encouraged to work with earnings subsidies, which should then be phased out with high implicit marginal tax rates. Third, capital income should be taxed. We explain why the famous zero marginal tax rate result for the top earner in the Mirrlees model and the zero capital income tax rate results of Chamley and Judd, and Atkinson and Stiglitz are not policy relevant in our view.
(10) When and Why Incentives (Don’t) Work to Modify Behavior
|Uri Gneezy, Stephan Meier and Pedro Rey-Biel|
|First we discuss how extrinsic incentives may come into conflict with other motivations. For example, monetary incentives from principals may change how tasks are perceived by agents, with negative effects on behavior. In other cases, incentives might have the desired effects in the short term, but they still weaken intrinsic motivations. To put it in concrete terms, an incentive for a child to learn to read might achieve that goal in the short term, but then be counterproductive as an incentive for students to enjoy reading and seek it out over their lifetimes. Next we examine the research literature on three important examples in which monetary incentives have been used in a nonemployment context to foster the desired behavior: education; increasing contributions to public goods; and helping people change their lifestyles, particularly with regard to smoking and exercise. The conclusion sums up some lessons on when extrinsic incentives are more or less likely to alter such behaviors in the desired directions.|
(11) Retrospectives: X-Efficiency
In a 1966 article in the American Economic Review, Harvey Leibenstein introduced the concept of “X-efficiency”: the gap between ideal allocative efficiency and actually existing efficiency. Leibenstein insisted that absent strong competitive pressure, firms are unlikely to use their resources efficiently, and he suggested that X-efficiency is pervasive. Leibenstein, of course, was attacking a fundamental economic assumption: that firms minimize costs. The X-efficiency article created a firestorm of criticism. At the forefront of Leibenstein’s powerful critics was George Stigler, who was very protective of classical price theory. In terms of rhetorical success, Stigler’s combination of brilliance and bluster mostly carried the day. While Leibenstein’s response to Stigler was well reasoned, it never resonated with many economists, and Leibenstein remains undeservedly underappreciated. Leibenstein’s challenge is as relevant today as it ever was.
(12) Recommendations for Further Reading
The "Chermany" Problem of Unsustainable Exchange Rates
Martin Wolf coined the term “Chermany” in one of his Financial Times columns in March 2010. China and Germany have been running the largest trade surpluses in the world for the last few years. Moreover, one of the reasons they both have such large trade surpluses is that the exchange rate of their respective currencies is set at a low enough level vis-a-vis their main trading partners to ensure strong exports and weak imports. Germany’s huge trade surpluses are part of the reason that the euro-zone is flailing. Could China’s huge trade surpluses at some point be part of a broader crisis in the U.S. dollar-denominated world market for trade?
Start with some facts about Chermany’s trade surpluses, using graphs generated from the World Bank’s World Development Indicators database. The first graph shows their current account trade surpluses since 1980 expressed as a share of GDP: China in blue, Germany in yellow. The second graph shows their trade surpluses in U.S. dollars. Notice in particular that the huge trade surpluses for Chermany are a relatively recent phenomenon. China ran only smallish trade surpluses or outright trade deficits up to the early 2000s. Germany ran trade deficits through most of the 1990s. In recent years, China’s trade surpluses are larger in absolute dollars, but Germany’s surpluses are larger when measured as a share of GDP.
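The dollars-versus-share-of-GDP comparison is easy to see with a toy calculation. The figures below are hypothetical round numbers chosen only to illustrate the point, not actual values from the World Development Indicators:

```python
# Hypothetical numbers (not actual WDI data): a smaller surplus in absolute
# dollars can still be the larger surplus measured as a share of GDP.
china = {"surplus": 300, "gdp": 6000}     # billions of U.S. dollars
germany = {"surplus": 200, "gdp": 3300}

for name, c in [("China", china), ("Germany", germany)]:
    share = c["surplus"] / c["gdp"]
    print(f"{name}: ${c['surplus']} billion surplus, {share:.1%} of GDP")

# China's surplus is bigger in dollars (300 > 200), but Germany's is
# bigger relative to the size of its economy (6.1% > 5.0%).
```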
Those who have trade surpluses, like China and Germany, wear them as a badge of economic virtue. Those of us with trade deficits, like the United States, like to complain about those trade surpluses as a sign of unfairness, and to wear our own trade deficits as a hairshirt of economic shame. In my view, the balance of trade is the most widely misunderstood basic economic statistic.
The economic analysis of trade surpluses starts by pointing out that a trade surplus isn’t at all the same thing as healthy economic growth. Economic growth is about better-educated and more-experienced workers, using steadily increasing amounts of capital investment, in a market-oriented environment where innovation and productivity are rewarded. Sometimes that is accompanied by trade surpluses; sometimes not. China had rapid growth for several decades before its trade surpluses erupted. Germany has been a high-income country for a long time without running trade surpluses of nearly this magnitude. Japan has been running trade surpluses for decades, with a stagnant economy over the last 20 years. The U.S. economy has run trade deficits almost every year since 1980, but has had solid economic growth and reasonably low unemployment rates during much of that time.
Instead, think of trade imbalances as creating mirror images. A country like China can only have huge trade surpluses if another country, in this case the United States, has correspondingly large trade deficits. China’s trade surplus means that it earns U.S. dollars with its exports, doesn’t use all of those U.S. dollars to purchase imports, and ends up investing those dollars in U.S. Treasury bonds and other financial investments. China’s trade surpluses and enormous holdings of U.S. dollar assets are the mirror image of U.S. trade deficits and the growing indebtedness of the U.S. economy.
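The mirror-image accounting can be sketched in a few lines. In a two-country world, one country's surplus must equal the other's deficit, and the unspent export earnings show up as purchases of the deficit country's assets. The numbers here are hypothetical, not actual trade data:

```python
# Toy two-country illustration of the mirror-image accounting
# (hypothetical numbers, not actual China/U.S. trade figures).
exports_china_to_us = 400     # billions of dollars
imports_china_from_us = 100

china_surplus = exports_china_to_us - imports_china_from_us
us_deficit = -china_surplus   # the deficit is the surplus's mirror image

# The surplus country earns dollars it does not spend on imports; those
# dollars end up invested in U.S. assets such as Treasury bonds.
china_net_purchases_of_us_assets = china_surplus

print(china_surplus)                     # 300
print(china_surplus + us_deficit)        # 0: the imbalances sum to zero
print(china_net_purchases_of_us_assets)  # 300: capital flow mirrors trade flow
```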
For the European Union as a whole, its exports and imports are fairly close to balance. Thus, if any country like Germany is running huge trade surpluses, it must be balanced out by other EU countries running large trade deficits. Germany’s trade surpluses mean that it was earning euros selling to other countries within the EU, not using all of those euros to buy from the rest of the EU, and ending up investing the extra euros in debt issued by other EU countries. In short, Germany’s trade surpluses and build-up of financial holdings are the necessary flip side of the high levels of borrowing by Greece, Italy, Spain, Portugal, and Ireland.
Joshua Aizenman and Rajeswari Sengupta explored the parallels between Germany and China in an essay in October 2010 called: “Global Imbalances: Is Germany the New China? A sceptical view.” They carefully mention a number of differences, and emphasize the role of the U.S. economy in global imbalances as well. But they also point out the fundamental parallel that China runs trade surpluses and uses the funds to finance U.S. borrowing. They write: “Ironically, Germany seems to play the role of China within the Eurozone, de-facto financing deficits of other members.”
Of course, when loans are at risk of not being repaid, lenders complain. German officials blame the profligacy of the borrowers in other EU countries. Chinese officials like to warn the U.S. that it needs to rectify its overborrowing. But whenever loans go really bad, it’s fair to put some of the responsibility on the lender, not just the borrowers.
If a currency isn’t allowed to fluctuate–like the Chinese yuan vs. the U.S. dollar, or like Germany’s euro vs. the euros of its EU trading partners–and if the currency is undervalued when compared with wages and productivity in trading partners, then huge and unsustainable trade imbalances will result. And without enormous changes in economic patterns of wages and productivity, as well as in levels of government borrowing, those huge trade imbalances will eventually lead to financial crisis.
A group of 16 prominent economists and central bank officials calling themselves the “Committee on International Economic Policy and Reform” wrote a study on “Rethinking Central Banking” that was published by the Brookings Institution in September 2011. They point out that most international trade used to be centered on developed countries with floating exchange rates: the U.S., the countries of Europe, and Japan. A number of smaller economies might seek to stabilize or fix their exchange rate, but in the context of the global macroeconomy, their effect was small. There was a sort of loose consensus that when economies became large enough, their currencies would be allowed to move.
But until very recently, China was not letting its foreign exchange rate move vis-a-vis the U.S. dollar. As the Committee points out: “While a large part of the world economy has adopted this model, some fast-growing emerging markets have not. The coexistence of floaters and fixers therefore remains a characteristic of the world economy. … A prominent instance of the uneasy coexistence of floaters and fixers is the tug of war between US monetary policy and exchange rate policy in emerging market “fixers” such as China.” The Committee emphasizes that the resulting patterns of huge trade surpluses and corresponding deficits lead to spillover effects around the world economy.
The Brookings report doesn’t discuss the situation of Germany and the euro, but the economic roots of an immovable exchange rate leading to unsustainable imbalances apply even more strongly to Germany’s situation inside the euro area.
The world economy needs a solution to its Chermany problem: What adjustments should happen when exchange rates are fixed at levels that lead to unsustainably large levels of trade surpluses for some countries and correspondingly large trade deficits for others? Germany’s problems with the euro and EU trading system are the headlines right now. Unless some policy changes are made, China’s parallel problems with the U.S. dollar and the world trading system are not too many years away.
Are U.S. Banks Vulnerable to a European Meltdown?
The Federal Reserve conducts a Senior Loan Officer Opinion Survey on Bank Lending Practices each quarter at about 60 large U.S. banks. The results of the October 2011 survey, released last week, suggest that these banks do not see themselves as facing a high exposure to risks of a European financial and economic meltdown.
Here are a couple of sample questions from the survey. The first question shows that of the senior loan officers at 50 banks who answered the question, 36 say that less than 5% of their outstanding commercial and industrial loans are to nonfinancial companies with operations in the U.S. that have significant exposure to European economies. The second question shows that out of 49 banks, 24 don’t have any outstanding loans to European banks, and 17 of the 25 banks that do have such loans have tightened their lending standards somewhat or considerably in the last three months.
Of course, a European economic and financial meltdown could affect the U.S. economy in a number of ways: for example, by reducing U.S. exports to European markets, or by affecting nonbank financial institutions like money market funds, hedge funds, or broker-dealers. But banks, because they hold federally insured deposits, are more likely to be bailed out by taxpayers if they get into trouble. So it’s good news (if it indeed turns out to be true!) that major U.S. banks don’t have high exposure either to firms doing business in Europe or to European banks.
Unexpected Economics: My New Teaching Company Course
Start your holiday shopping now! The Teaching Company has just released my latest lecture course, called “Unexpected Economics.”
A detailed description of the course and a promotional video are at the company’s website here. But for a brief overview, it’s an economics course that barely (if at all) mentions recession, unemployment, inflation, budget deficits, international trade, poverty, monopoly, financial crises, and all the other standby topics of the standard intro economics class. Instead, there are lectures on the economics of religion, crime, obesity, natural disasters, cooperation, addiction, charity, and other subjects. A list of the lectures with capsule descriptions is at the bottom of this post.
For professional economists, nothing in the list of topics will be especially “unexpected”: economists will know that many well-known and even Nobel laureate economists have worked on these issues. But the economic perspective on these subjects is likely to be unexpected, and therefore potentially illuminating (or sometimes infuriating?) for many others.
For those who somehow aren’t on the mailing list to get the Teaching Company catalogs, the company focuses on what I’d call serious education for generalists. Courses are often fairly long: my “Unexpected Economics” class is 24 half-hour lectures. Some previous courses I’ve done for them, like “America and the New Global Economy” and the basic “Economics” course, are 36 half-hour lectures each. The courses presume no background at all in the subject, but they do presume that the listener has the patience to follow along and to give the lecturer space to develop each topic.
I recorded my first course for the Teaching Company back in 1994. The courses that I’ve recorded for them that are presently available, with links to the Teaching Company websites, include:
- Economics: An Introduction (third edition);
- America and the New Global Economy;
- Legacies of the Great Economists (audio only);
- History of the U.S. Economy in the 20th Century (audio only).
For some history of the Teaching Company and a sympathetic view of its mission, Heather MacDonald wrote an overview in the Summer 2011 issue of City Journal, “Great Courses, Great Profits.”
Here’s the list of titles and capsule descriptions for the 24 lectures of “Unexpected Economics”:
1. The World of Choices
In this introductory lecture, encounter a definition of economics far broader than the one understood by most people. Also, learn that economics is about the choices you make in every aspect of life, their consequences, and the degree to which the realm of choice itself is larger than you would think.
2. A Market for Pregnancy
History demonstrates that most people are creatures of their own time in terms of what they will accept as appropriate economic transactions. Focus on three once-banned examples now seen as partly or completely ordinary: interest payments, life insurance, and the so-called “baby market.”
3. Selling a Kidney
The discussion of forbidden transactions continues with a provocative look at the controversy over the buying and selling of human organs like kidneys or livers. No matter where you stand on the issue, this consideration of benefits and costs offers a fresh perspective from which to consider your views of medicine and longevity.
4. Traffic Congestion—Costs, Pricing, and You
Can economic analysis do anything about traffic? This free-wheeling examination of the choices that produce both the problem and solution reveals an example of the “tragedy of the commons” and looks at how the city of London addressed the issue with a strategy of “congestion pricing.”
5. Two-Way Ties between Religion and Economics
Although links between religion and economics may seem counterintuitive to many, the possibility has interested economists ever since the work of Adam Smith. This lecture explores how several different aspects of religion may in fact contribute to the underpinnings of the real-world economy.
6. Prediction Markets—Windows on the Future
Can the likelihood of a future event be predicted? If so, how? Plunge into the world of prediction markets, where people place bets on what will happen, and get some surprising answers—in areas that include elections, Oscar winners, and the performance of new products.
7. Pathways for Crime and Crime Fighting
Step beyond the traditional nature-versus-nurture argument about the causes of crime to examine the problem from an economic perspective. Grasp how incentives and tradeoffs affect the decisions of both criminals and law enforcement and how policymakers might better take them into account.
8. Terrorism as an Occupational Choice
Is there a causal relationship between poverty and terrorism, or a pathway to terrorism originating in a lack of education? Discover some surprising answers as you turn your attention to the choices and incentives economists find when examining how terrorism and its practitioners find each other.
9. Marriage as a Search Market
By looking at the many factors that go into marriage—from the “market” where people find their spouses to the forces that influence their choices about remaining together or splitting apart—you’ll gain fresh insight into why marriage patterns have shifted so dramatically.
10. Procreation and Parenthood
Beginning with the work of Thomas Malthus, explore the many ways in which our child-bearing patterns have changed, including a current theory that actually sees children as a kind of luxury good, where an increase in income translates into an even higher investment in one\’s children.
11. Small Choices and Racial Discrimination
Take a close look at how discrimination against African Americans manifests itself in various markets. Learn about the different analytical perspectives set forth by economists like Becker, Schelling, and Arrow; the impact of the Civil Rights Act of 1964; and three different approaches to achieving equality.
12. Cooperation and the Prisoner’s Dilemma
A discussion of the classic Prisoner’s Dilemma and its application to the market economy leads to a surprising conclusion: Competition and cooperation are not opposites. Instead, they are interlocking, with market competition the ultimate example of socially cooperative behavior.
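The core logic of the game can be sketched in a few lines of code, using the standard textbook payoffs (these numbers are the conventional illustration, not specific to the lecture): defecting is each player's dominant strategy, yet mutual cooperation would leave both better off.

```python
# Classic Prisoner's Dilemma payoffs (years in prison, so lower is better).
# payoff[(my_move, their_move)] = my sentence; standard textbook numbers.
payoff = {
    ("cooperate", "cooperate"): 1,
    ("cooperate", "defect"): 10,
    ("defect", "cooperate"): 0,
    ("defect", "defect"): 5,
}

def best_response(their_move):
    """My sentence-minimizing move, given the other player's move."""
    return min(["cooperate", "defect"], key=lambda m: payoff[(m, their_move)])

# Defection is a dominant strategy: best response to either move...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection (5 years each) is worse for both players than
# mutual cooperation (1 year each).
print(payoff[("defect", "defect")], payoff[("cooperate", "cooperate")])  # 5 1
```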
13. Fairness and the Ultimatum Game
Turn your focus to the world of “laboratory economics,” where game exercises like the “ultimatum game,” “dictator,” “punishment,” “trust,” “gift exchange,” and others help cast fresh light on key aspects of economic behavior. Learn how perceived fairness and other motivating factors can affect real-life markets.
14. Myopic Preferences and Behavioral Economics
Myopia can refer to more than eyesight. Grasp the impact of setting aside the long-term view in favor of the shorter one, a pattern whose consequences can be seen not only in personal finance, but in health care, time management, and many other aspects of behavior.
15. Altruism, Charity, and Gifts
Is charity always altruistic, or is giving really motivated by disguised self-interest? Examine the many possible motivations for charity, societal steps to support it, and the imbalances that occur when a recipient’s valuation of a gift is markedly different from that of the giver.
16. Loss Aversion and Reference Point Bias
How a choice is framed significantly affects which alternative is chosen, even when either will produce identical results. Grasp why this is so through concepts like loss aversion and reference dependence, and see how public “nudge” policies might be used to influence those choices.
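Loss aversion can be made concrete with the Kahneman-Tversky prospect-theory value function, using their published 1992 parameter estimates (curvature 0.88 and loss-aversion coefficient 2.25); this is a standard textbook illustration, not material from the lecture itself:

```python
# Kahneman-Tversky prospect-theory value function, with their 1992
# estimates: curvature ALPHA = 0.88, loss-aversion LAMBDA = 2.25.
ALPHA = 0.88
LAMBDA = 2.25

def value(x):
    """Subjective value of a gain or loss x, relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# Losses loom larger than equivalent gains: losing $100 hurts roughly
# twice as much as gaining $100 pleases, so a 50/50 bet to win or lose
# $100 has negative subjective value and is typically rejected.
print(value(100) + value(-100) < 0)  # True
```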
17. Risk and Uncertainty
An eye-opening exploration of how economists see risk introduces behavioral descriptions like risk-neutral, risk-seeking, and risk-averse, along with the most useful way to realistically evaluate probabilities and make decisions. Learn how this knowledge factors into public policy subjects like energy and finance.
18. Human Herds and Information Cascades
Most of us understandably “run with the herd” when making decisions outside our personal expertise. Although this often leads to correct choices, it can also lead us astray. Learn the dangers of “information cascades” and how to avoid joining herds headed in disastrous directions.
19. Addiction and Choice
In a provocative examination of a controversial subject, learn how some economists have analyzed drug addiction as a chosen behavior—and therefore subject to analysis of costs, benefits, and incentives, much like any other personal choice.
20. Obesity—Who Bears the Costs?
American men and women have both gained an average of more than 20 pounds since the early 1960s. Address the reasons for this increase, the role played by economic factors, and the implications for public policy, including who really pays the costs obesity imposes.
21. The Economics of Natural Disasters
The damage natural disasters do to people and property is not just a matter of fate. Explore how the answers to economic questions about socially appropriate investments in advance planning and preparation for rapid response can influence the impact of a natural disaster.
22. Sports Lessons—Pay, Performance, Tournaments
Although sports account for a far less significant piece of the economy than most people assume, they provide economists with especially clear examples with much broader applicability. Grasp what sports can teach us about issues like pay and promotion, valuing talent, and providing incentives for top performance.
23. Voting, Money, and Politics
What happens when voting—its costs, incentives, even its very rationality—is subjected to the scrutiny of economists? Understand the real impacts on this defining principle of democracy from factors like voter ignorance, money, special interests, and turnout.
24. The Pursuit of Happiness
If happiness is indeed the ultimate incentive for the choices we make, is there any way to measure it? This concluding lecture examines several ways in which economists have sought to define and measure this elusive goal and the ways in which the results might influence our choices.
Heavier Cars Kill
Michael Anderson and Maximilian Auffhammer from UC-Berkeley point out in NBER Working Paper #17170 released last June: “The average weight of light vehicles sold in the United States has fluctuated substantially over the past 35 years. From 1975 to 1980, average weight dropped almost 1,000 pounds (from 4,060 pounds to 3,228 pounds), likely in response to rising gasoline prices and the passage of the Corporate Average Fuel Efficiency (CAFE) standard. As gasoline prices fell in the late-1980s, however, average vehicle weight began to rise, and by 2005 it had attained 1975 levels …”
The trend is clearly visible in one of the annual reports from the EPA: Light-Duty Automotive Technology, Carbon Dioxide Emissions, and Fuel Economy Trends: 1975 Through 2010.
Notice that as the weight of cars increased for most of the last 30 years, horsepower and acceleration have also risen–although all three of these trends have been disrupted by the higher gasoline prices in the last couple of years.
The trend toward heavier cars involves two tradeoffs I'll mention here: 1) improvements in car technology have gone to horsepower and acceleration, instead of to improved miles-per-gallon; and 2) heavier cars are more likely to lead to deaths when accidents occur.
Christopher R. Knittel of MIT has examined what could have happened if technological progress in the auto industry had been focused on improved miles-per-gallon. He writes: "This paper estimates the technological progress that has occurred since 1980 in the automobile industry and the trade-offs faced when choosing between fuel economy, weight, and engine power characteristics. The results suggest that if weight, horsepower, and torque were held at their 1980 levels, fuel economy could have increased by nearly 60 percent from 1980 to 2006." (Knittel's paper is called "Automobiles on Steroids: Product Attribute Trade-Offs and Technological Progress in the Automobile Sector." The paper is forthcoming in the American Economic Review; in the meantime, a PDF version is available at Knittel's website here.) Here's a figure from the EPA report showing the lack of progress in miles-per-gallon over most of the last three decades.
Another problem with heavier cars is that they are deadlier in accidents. The danger posed by heavier cars is the focus of the Anderson and Auffhammer working paper, which is titled "Pounds that Kill: The External Costs of Vehicle Weight." The working paper is only available on-line by subscription, but a short summary written by Lester Picker is available here.
The authors write: "We present robust evidence that increasing striking vehicle weight by 1,000 pounds increases the probability of a fatality in the struck vehicle by 40% to 50%. This finding is unchanged across different specifications, estimation methods, and different subsets of the sample. We show that there are also significant impacts on serious injuries." They find that when heavier vehicles collide with each other, the fatality rate is not higher. But when heavier vehicles collide with lighter vehicles, motorcycles, or pedestrians, the death rate is higher. They estimate the cost of increased fatalities alone–that is, not counting the costs of more serious injuries–at $93 billion per year.
Many U.S. consumers have clearly demonstrated their preference over the last three decades for heavier cars with more horsepower and acceleration. For at least some consumers, a larger car is a choice made in self-defense, given the dangers posed by a collision with all the other large cars on the roads. But while owners of large cars are engaging in their version of an arms race, the rest of us face greater risks of injury and death in an accident. In Knittel's paper, he points out that the Obama administration has been seeking to raise the average miles-per-gallon standard to 35.5 mpg by 2016. Knittel argues that the only way to accomplish this goal will be for the average car to get lighter–which should also save some lives.
Martin Shubik\’s Dollar Auction Game
Martin Shubik\’s endowed chair at Yale University is the Seymour Knox Professor Emeritus of Mathematical Institutional Economics. Most of the time, \”mathematical\” and \”institutional\” economists are separate people. But Shubik\’s career has combined both deep mathematical insights about strategic behavior and also applications to financial, corporate, defense, and other institutions. The opening pages of the just-arrived October 2011 issue of the American Economic Review offer a short tribute to Shubik, who was named a Distinguished Fellow of the American Economic Association in 2010. A list of past Distinguished Fellows is here; the short description of Shubik\’s work from the AER is here.
One of my favorites of Shubik\’s papers, in part because it is so accessible that it can readily be used with introductory students and in part because it gives a sense of how his mind works, is the Dollar Auction Game. The Dollar Auction Game is in some ways similar to the better-known prisoner\’s dilemma, because it illustrates how two parties each pursuing their own self-interest can end up with an outcome that makes both of them worse off. The first published discussion of the game is in the Journal of Conflict Resolution, March 1971, pp. 109-111, which is not freely available on-line but can be found on JSTOR.
The rules of the Dollar Auction are deceptively simple: "The auctioneer auctions off a dollar bill to the highest bidder, with the understanding that both the highest bidder and the second highest bidder will pay. For example, if A has bid 10 cents and B has bid 25 cents, the auctioneer will pay a dollar to B, and A will be out 10 cents."
Now consider how the game unfolds. Imagine that two players are willing to bid small amounts, enticed by the prospect of the reward. Then the logic of the game takes hold. Say that player A has bid 20 cents and player B has bid 25 cents. Player A reasons: "If I quit now, I lose 20 cents. But if I bid 30 cents, I have a chance to win the $1 and thus gain 70 cents." So Player A bids more. But the same logic applies for Player B: lose what was already bid, or bid more.
But if both players continue to follow this logic, they find their bids steadily climbing past 50 cents apiece: in other words, the sum of their bids exceeds the dollar for which they are bidding. They approach bidding $1 each–but even reaching this level doesn't halt the logic of the game. Say that A has bid 95 cents, and B has bid $1. Player A reasons: "If I quit now, I lose 95 cents. But if I bid $1.05, and win the dollar, I lose only 5 cents." So Player A bids more than a dollar, and Player B, driven by the same logic, bids higher as well.
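The escalation logic can be put into code. The sketch below is a hypothetical simulation (the 5-cent bid step, the round cap, and the myopic decision rule are my own illustrative choices, not part of Shubik's paper): each turn, a player compares the sure loss from quitting against the net position from topping the rival's bid.

```python
# A hypothetical sketch of the Dollar Auction's escalation logic.
# Both bidders pay their bids; only the higher bidder collects the prize.

PRIZE = 100  # the dollar bill, in cents
STEP = 5     # minimum raise, in cents

def wants_to_raise(my_bid, rival_bid):
    """A myopic player raises whenever the net position after winning
    (PRIZE minus the new bid) beats the sure loss from quitting now
    (losing my_bid)."""
    new_bid = rival_bid + STEP
    return PRIZE - new_bid > -my_bid

def play(max_rounds=60):
    bids = [0, STEP]  # player 0 opens at zero, player 1 tops it
    turn = 0
    while max_rounds > 0 and wants_to_raise(bids[turn], bids[1 - turn]):
        bids[turn] = bids[1 - turn] + STEP
        turn = 1 - turn
        max_rounds -= 1
    return bids

final_bids = play()
print(final_bids, sum(final_bids))
```

Because the sunk bid appears on both sides of the comparison, it cancels out and raising always looks attractive; in this sketch the loop only stops at the round cap, by which point each bid alone exceeds the dollar, just as in Shubik's parlor games.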
Apparently, Shubik and his colleagues liked to play these games at parties. As he writes in the 1971 article: "In playing this game, a large crowd is desirable. Furthermore, experience has indicated that the best time is during a party when spirits are high and the propensity to calculate does not settle in until after at least two bids have been made. … Once two bids have been obtained from the crowd, the paradox of escalation is real. Experience with the game has shown that it is possible to 'sell' a dollar bill for considerably more than a dollar. A total of payments between three and five dollars is not uncommon. … This simple game is a paradigm for escalation. Once the contest has been joined, the odds are that the end will be a disaster to both. When this is played as a parlor game, this usually happens."
With any game of this sort, two sorts of questions arise: Under what conditions can the players sidestep the escalation? And does this simple game address real-world phenomena?
Of course, the two players can avoid the escalation if they communicate with each other, agree not to increase their bids, and perhaps also agree to split the gains. It may be necessary in this situation to enforce the agreement with a threat: for example, I will stop bidding and you will also stop bidding, but if you bid again, I will immediately jump my bid to $1 and force us both to take losses. Or: unless you stop bidding, I vow to bid forever, no matter the losses. Of course, whether such threats are credible would be an issue.
Another exit strategy from the game is for one player to see where the game is headed, and to stop bidding. Notice that by bailing out of the game, the player who stops is a "loser" in a relative sense: that is, the other player gets the dollar. But by bailing out sooner, the player who stops actually prevents both players from escalating further and ending up as even bigger losers. A more extreme version of this strategy is that a player may refuse to follow the rules, perhaps declaring the game to be "unfair," and refuse to be bound by paying that player's previous bid. To avoid this possibility, perhaps the bids would need to be handed to the auctioneer as the bidding proceeds.
Again, the bottom line of the Dollar Auction game is to illustrate a simple setting in which self-interested behavior leads to losses for both players–in this case, to escalating losses until one of the players decides that enough is enough.
The Dollar Auction Game is simple enough that it doesn't fit perfectly with any real-world situation. However, John Kay argued in a Financial Times op-ed in July that many of our decisions in the last few years about bailing out financial institutions and now countries like Greece have a "dollar auction" aspect to them, in the sense that governments keep thinking that if they just make one more bid, they will have gains–or at least they will reduce the size of their losses. What the governments don't seem to realize is that the other parties in the economy will keep making another bid as well, forcing the government to make yet another bid.
Perhaps the deeper wisdom here is that when entering into a competitive situation, it's useful to look ahead and have a clear vision of what the end-game would look like. If you find yourself in a situation of escalation, by all means try instead to negotiate for a way in which both sides can combine a strategy of lower bids and bid-curdling threats–and then end up sharing the prize. But if negotiation seems impossible, and appealing to the rationality of the other player doesn't work, it is better to bail out from the bidding than to continue escalating with an irrational player. Better to take the immediate loss, and to let the less-rational player win the dollar, than to build up to larger losses. As John Kay writes: "In the dollar bill auction, one party eventually scores a pyrrhic victory and takes possession of the dollar bill. Both parties lose, but the smaller loser is the person who sticks out longest. That is not usually the rational player."
Grade Inflation and Choice of Major
Like so many other bad habits, grade inflation is lots of fun until someone gets hurt. Students are happy with higher grades. Faculty are happy not quarreling with students about grades.
When I refer to someone getting hurt by grade inflation, I'm not talking about the sanctity of the academic grading process, which is a mildly farcical concept to begin with and at any rate too abstract for me. I'm also not referring to how it gets harder for law and business schools to sort out applicants when so many students have high grades. In the great list of social problems, the difficulties of law and B-school admissions offices don't rank very high.
To me, the real and practical problem of grade inflation is that it causes students to alter their choices, away from fields with tougher grading, like the sciences and economics, and toward fields with easier grading.
A couple of recent high-profile newspaper stories have highlighted that college and university courses in the "STEM" areas of science, technology, engineering and mathematics tend to have lower average grades than courses in humanities, which is one factor that discourages students from pursuing those fields. Here's an overview of those stories, and then some connections to more academic treatments of the topic from my own Journal of Economic Perspectives.
A New York Times story on November 4, by Christopher Drew, was titled "Why Science Majors Change Their Minds (It’s Just So Darn Hard)." Drew writes: "Studies have found that roughly 40 percent of students planning engineering and science majors end up switching to other subjects or failing to get any degree. That increases to as much as 60 percent when pre-medical students, who typically have the strongest SAT scores and high school science preparation, are included, according to new data from the University of California at Los Angeles. That is twice the combined attrition rate of all other majors."
Part of the reason is that most of the STEM fields start off with a couple of years of tough, dry, abstract courses, for which many students are not academically or emotionally prepared. Another reason is that the grading in these courses is tougher than in non-STEM fields. Drew describes some of the evidence: "After studying nearly a decade of transcripts at one college, Kevin Rask, then a professor at Wake Forest University, concluded last year that the grades in the introductory math and science classes were among the lowest on campus. The chemistry department gave the lowest grades over all, averaging 2.78 out of 4, followed by mathematics at 2.90. Education, language and English courses had the highest averages, ranging from 3.33 to 3.36. Ben Ost, a doctoral student at Cornell, found in a similar study that STEM students are both “pulled away” by high grades in their courses in other fields and “pushed out” by lower grades in their majors."
(For those who want the underlying research, the Rask paper is available here, and the Ost paper is available here.)
On November 9, the Wall Street Journal had a story called "Generation Jobless: Students Pick Easier Majors Despite Less Pay," written by Joe Light and Rachel Emma Silverman.
"Although the number of college graduates increased about 29% between 2001 and 2009, the number graduating with engineering degrees only increased 19%, according to the most recent statistics from the U.S. Dept. of Education. The number with computer and information-sciences degrees decreased 14%." Again, part of the problem is insufficient preparation before college for the STEM classes, and part is the discouragement of getting lower grades than those in non-STEM fields. Also, even with lower grades, the STEM majors are more work: "In a recent study, sociologists Richard Arum of New York University and Josipa Roksa of the University of Virginia found that the average U.S. student in their sample spent only about 12 to 13 hours a week studying, about half the time spent by students in 1960. They found that math and science—though not engineering—students study on average about three hours more per week than their non-science-major counterparts."
(For those who want to go to the original sources, Arum and Roksa discuss the several thousand students that they surveyed over several years in their 2011 book Academically Adrift: Limited Learning on College Campuses. When they make comparisons back to how much students studied in the 1960s, they are drawing on work by Philip Babcock and Mindy Marks. For a readable overview of that work, see their August 2010 essay "Leisure College, USA: The Decline in Student Study Time," written as an Education Policy brief for the American Enterprise Institute. For the technical academic version of their work, see their essay in the May 2011 Review of Economics and Statistics (Vol. 93, No. 2, pp. 468-478), "The Falling Time Cost of College: Evidence from Half a Century of Time Use Data.")
As noted, there are lots of reasons why students don't persevere in STEM courses: inadequate preparation at the high school level, students who have unrealistic expectations or don't want to commit the time to studying, or simply that the courses are hard. It's of course possible to address these issues, but difficult. However, if one of the factors discouraging students from taking STEM courses is that grade inflation has proceeded faster in the humanities, then surely this cause, at least, is fixable? In my own Journal of Economic Perspectives, which is freely available from the current issue going back to the late 1990s courtesy of the American Economic Association, several authors have taken a stab at quantifying the differences in grades across majors and what difference those differences make to course choice.
The first such paper we published was back in the Winter 1991 issue. It was by Richard Sabot and John Wakeman-Linn, and called "Grade Inflation and Course Choice." It's too far back to be freely available on-line, but it's available through JSTOR. The complaints in that article sound quite familiar. They write:
\”The number of students graduating from American colleges and universities who had majored in the sciences declined from 1970-71 to1984-85, both as a proportion of the steadily growing total and in
absolute terms. … Students make their course choices in response to a powerful set of incentives: grades. These incentives have been systematically distorted by the grade inflation of the past 25 years. As a consequence of inflation, many universities have split into high- and low-grading departments. Economics, along with Chemistry and Math, tends to be low-grading. Art, English, Philosophy, Psychology, and Political Science tend to be high-grading.\” They present more evidence on grade inflation and course choice from Amherst College, Duke University, Hamilton College, Haverford College, Pomona College, the University
of Michigan, the University of North Carolina and the University of Wisconsin, and more detailed analysis from their own Williams College. As they write: \”This sample is admittedly small, but was selected so as to include private and state schools, large universities and small colleges, and Eastern, Southern, Midwestern and Western schools.\”
Based on statistical analysis from Williams College, where they have access to more detailed data, they write: "Our simulation indicated that if Economics 101 grades were distributed as they are in English 101, the number of students taking one or more courses beyond the introductory course in Economics would increase by 11.9 percent. Conversely, if grades in English 101 were distributed as they are in Economics 101, the simulation indicated that the number of students taking one or more courses beyond the introductory course in English would decline by 14.4 percent. The results of applying this method to the Math department, which had the lowest mean grade and the highest dispersion of grades, are more striking. If the Math department adopted in its introductory course the English 101 grading distribution, our simulation indicated an 80.2 percent increase in the number of students taking at least one additional Math course! Alternatively, if the English department adopted the Math grade distribution, there would be a decline of 47 percent in the number of students taking one or more courses beyond the introductory course in English."
We took another swing at the issue of grades and course choice with a couple of articles in our Summer 2009 issue. Alexandra C. Achen and Paul N. Courant asked "What Are Grades Made Of?" They argue: "Grades are an element of an intra-university economy that determines, among other things, enrollments and the sizes of departments. … Departments generally would prefer small classes populated by excellent and highly motivated students. The dean, meanwhile, would like to see departments supply some target quantity of credit hours—the more the better, other things equal—and will penalize departments that don’t do enough teaching. In this framework, grades are one mechanism that departments can use to influence the number of students who will take a given class."
Focusing on 25 years of grade data from the University of Michigan, they find: "First, the distribution of grades is likely to be lower where courses are required, and where there are agreed-upon and readily assessed criteria—right or wrong answers—for grading. By contrast, departments that evaluate student performance using interpretative methods will tend to have higher grades, because using these methods increases the personal cost to instructors of assigning and defending low grades. Second, upper-division classes are likely to have higher grades than lower-division classes, both because students have selected into the upper-division courses where their performance is likely to be stronger and because faculty want to support (and may even like) their student majors. Third, grades can be used in conjunction with other tools to attract students to departments that have low enrollments and to deter students from courses of study that are congested. We find some evidence in support of each of these patterns. As it happens, the consequence of the preceding tendencies is that, indeed, the sciences (mostly) grade harder than the humanities. …"
\”We argue that differential grading standards have potentially serious negative consequences for the ideal of liberal education. At the same time, we conclude that any discussion of a policy response to grade inflation must begin by recognizing that American colleges and universities are now in at least the fifth decade of well-documented grade inflation and differences in grading norms by field. Current grading behavior must and will be interpreted in the context of current norms and expectations about grades, not according to some dimly imagined (anyone who actually remembers it is retired) age of uniform standards across departments. Proposals that attempt to alter grading behavior will face the costs of acting against prevailing customs and expectations, whether in altering pre-existing patterns of grades across departments within a college or university or in attempting to alter grades in one institution while recognizing that other universities may
In that same issue, Talia Bar, Vrinda Kadiyali, and Asaf Zussman discuss one proposal to alter the incentives for grade inflation in "Grade Information and Grade Inflation: The Cornell Experiment." They report that in "April 1996, the [Cornell] Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students’ transcripts. … Curbing grade inflation was not explicitly stated as a goal of this policy. Instead, the stated rationale was that “students will get a more accurate idea of their performance, and they will be assured that users of the transcript will also have this knowledge.”"
To give a sense of the institutional obstacles here, they report that while median grades were publicly available on-line in 1998, at the time the article was written this information did not yet appear on actual student transcripts. As they point out, making this information available may have the undesired effect of encouraging students even more to take courses with easier grades! They also argue that the propensity to take easier-grading courses will be greater for lower-ability students. Thus, students will tend to sort themselves, with higher-ability students in tougher-grading classes and lower-ability students in easier-grading classes. Indeed, they estimate that nearly half of the grade inflation for Cornell as a whole, in the years after median grades were posted on the web, was attributable to students sorting themselves out in this way.
In short, grade inflation in the humanities has been contributing to college students moving away from science, technology, engineering, and math fields, as well as economics, for the last half century. It's time for the pendulum to start swinging back. A gentle starting point would be to make the distribution of grades by institution and by academic department (or for small departments, perhaps grouping a few departments together) publicly available, and perhaps even to add this information to student transcripts. If that answer isn't institutionally acceptable, I'm open to alternatives.
Job Openings, Labor Turnover, and the Beveridge Curve
The Bureau of Labor Statistics has just put out its "Job Openings and Labor Turnover Survey Highlights" with data up through September 2011–and lots of nice graphs and explanations. The 2010 Nobel Prize in Economics went to Peter A. Diamond, Dale T. Mortensen, and Christopher A. Pissarides "for their analysis of markets with search frictions." Their work is a reminder that unemployment is not just about a shortfall in demand, but is also a matter of search and matching by potential workers and employers. The JOLTS data offers the factual background on job openings, separations, hires, and more. The overall picture is of an unpleasantly stagnant labor market.
As a starting point, look at the relationship between hires, separations, and employment. Most of the time, the red line showing job separations and the blue line showing hires are pretty close together, with one just a bit over the other. When separations exceed hires for some months running in the early 2000s, total employment declines. Then hires exceed separations by a bit, month by month, and total employment grows. During the Great Recession, hiring drops off a cliff and separations fall sharply as well (more on that in a second). Total employment has rebounded a bit since late 2009, but it\’s interesting to note that the levels of hires and separations remain so low. Those with jobs are tending to stay in them; those without jobs aren\’t getting hired at a rapid rate.
What explains why job separations would fall during the Great Recession? After all, don't more people lose their jobs in a recession? Yes, but the category of "job separations" has two parts: voluntary quits and layoffs/discharges. During the recession, layoffs and discharges do rise sharply as shown by the red line, but quits fall even faster as shown by the blue line, as those with jobs hung on to them. Overall, job separations decreased. Notice that in the last year or so, layoffs and discharges have actually been relatively low compared to the pre-recession years in the mid-2000s. Quits have stayed lower, too.
The number of job openings in an economy tends to be a leading indicator for changes in employment. Notice the sharp drop in job openings in the recession of 2001, and again the sharp drop in job openings during the 2007-2009 recession. Employment levels decline soon after. However, it\’s interesting to note the upturn in job openings since the low point in July 2009, and how employment has correspondingly grown.
Looking at the ratio of unemployed people to job openings gives a sense of how difficult it is to find a job at any given time. Before the recession of 2001, there were about 1.2 unemployed people per job opening. In the aftermath of the "jobless recovery" from that recession, there were about 2.8 unemployed people per job opening in late 2003. There were about 1.5 unemployed people per job opening in mid-2007, but just after the end of the recession in late 2009, there were almost 7 unemployed people for every job opening. This statistic helps to emphasize that it isn't just that the unemployment rate remains high, but that unemployed people in a stagnant labor market, with low hiring and few separations, objectively will have a hard time finding jobs.
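The unemployed-per-opening figures in this paragraph are simple ratios; a toy computation shows the arithmetic (the input numbers below are round hypothetical values, not actual JOLTS or CPS data):

```python
# Hypothetical figures, in millions, chosen for round arithmetic.
unemployed = 14.0
job_openings = 2.0

ratio = unemployed / job_openings
print(f"{ratio:.1f} unemployed people per job opening")  # 7.0
```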
A final figure from this data is called the Beveridge curve. BLS explains: "This graphical representation of the relationship between the unemployment rate and the vacancy rate is known as the Beveridge Curve, named after the British economist William Henry Beveridge (1879-1963). The economy’s position on the downward sloping Beveridge Curve reflects the state of the business cycle. During an expansion, the unemployment rate is low and the vacancy rate is high. During a contraction, the unemployment rate is high and the vacancy rate is low." The figure is usefully colored in time segments, so the period before the 2001 recession is in light blue in the upper left corner; the 2001 recession is the darker blue line; the period of growth in the mid-2000s is the red line; the Great Recession is the green line; and the period since the recession is the purple line at the bottom right. The severity of the Great Recession is apparent as the green line stretches down to the right, with much higher unemployment and lower rates of job openings than the 2001 recession.
But the Beveridge curve also raises an interesting question: Is the economy getting worse at matching people with jobs? The low levels of hiring and separations suggest a stagnant labor market. The Beveridge curve might be another signal. As BLS explains: "The position of the curve is determined by the efficiency of the labor market. For example, a greater mismatch between available jobs and the unemployed in terms of skills or location would cause the curve to shift outward, up and toward the right." Notice that as the number of job vacancies has increased, the unemployment rate hasn't fallen as quickly as one might expect. To put it another way, the purple line is not retracing its way back up the green line of the Great Recession, but instead is above it and to the right.
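For readers who want to reproduce a point on the Beveridge curve from the underlying data, the two coordinates are computed as follows. The input numbers here are hypothetical; the vacancy-rate denominator follows the JOLTS convention of employment plus openings.

```python
# The two coordinates of one point on a Beveridge curve,
# with hypothetical inputs (in millions).

def unemployment_rate(unemployed, employed):
    """Unemployed as a percent of the labor force."""
    labor_force = unemployed + employed
    return 100 * unemployed / labor_force

def vacancy_rate(openings, employed):
    """Job openings as a percent of employment plus openings (JOLTS)."""
    return 100 * openings / (employed + openings)

u = unemployment_rate(unemployed=14.0, employed=140.0)
v = vacancy_rate(openings=3.0, employed=140.0)
print(f"unemployment rate {u:.1f}%, vacancy rate {v:.1f}%")
```

Plotting many months of (u, v) pairs, color-coded by period as in the BLS figure, traces out the curve and makes any outward shift visible.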
Of course, relationships in the economy aren\’t going to be as precise as, say, the relationship between altitude and air pressure. There isn\’t yet enough data to prove whether the Beveridge curve has in fact shifted out. But if it has indeed become harder in the U.S. economy to match unemployed workers with job openings–perhaps because the skills that employers are searching for are not the same as the skills that the unemployed have to offer?–then it will be even harder to bring down the unemployment rate.
An Alternative Poverty Measure from the Census Bureau
When the Census Bureau released its annual estimates of the poverty statistics in September, I mentioned some of the main themes in U.S. Poverty by the Numbers. I also mentioned that the Census Bureau was going to follow up with a report offering an alternative measure of poverty, which has now been published. Kathleen Short describes "The Research Supplemental Poverty Measure: 2010" in Current Population Reports P60-241.
As a starting point, here's my one-paragraph overview of the genesis of the current poverty line, taken from Chapter 16 of my Principles of Economics textbook available from Textbook Media:
\”In the United States, the official definition of the poverty line traces back to a single person: Mollie Orshansky. In 1963, Orshansky was working for the Social Security Administration, where she published an article called \”Children of the Poor\” in a highly useful and dry-as-dust publication called the Social Security Bulletin. Orshansky\’s idea was to define a poverty line based on the cost of a healthy diet. Her previous job had been at the U.S. Department of Agriculture, where she had worked in an agency called the Bureau of Home Economics and Human Nutrition, and one task of this of this bureau had been to calculate how much it would cost to feed a nutritionally adequate diet to a family. Orshansky found evidence that the average family spent one-third of its income on food. Thus, she proposed that the poverty line be the amount needed to buy a nutritionally adequate diet, given the size of the family, multiplied by three. The current U.S. poverty line is essentially the same as the Orshansky poverty line, although the dollar amounts are adjusted each year to represent the same buying power over time.\”
It has been argued for at least a couple of decades that while a poverty line defined in this way is workable, it can be improved. Back in 1995, a National Academy of Sciences Panel made some recommendations for a new approach to measuring poverty. Kathleen Short summarizes the main concerns of the NAS panel near the start of her report:
- \”The current income measure does not reflect the effects of key government policies that alter the disposable income available to families and, hence, their poverty status. Examples include payroll taxes, which reduce disposable income, and in-kind public benefit programs such as the Food Stamp Program/Supplemental Nutrition Assistance Program (SNAP) that free up resources to spend on nonfood items.
- The current poverty thresholds do not adjust for rising levels and standards of living that have occurred since 1965. The official thresholds were approximately equal to half of median income in 1963–64. By 1992, one half median income had increased to more than 120 percent of the official threshold.
- The current measure does not take into account variation in expenses that are necessary to hold a job and to earn income—expenses that reduce disposable income. These expenses include transportation costs for getting to work and the increasing costs of child care for working families resulting from increased labor force participation of mothers.
- The current measure does not take into account variation in medical costs across population groups depending on differences in health status and insurance coverage and does not account for rising health care costs as a share of family budgets.
- The current poverty thresholds use family size adjustments that are anomalous and do not take into account important changes in family situations, including payments made for child support and increasing cohabitation among unmarried couples.
- The current poverty thresholds do not adjust for geographic differences in prices across the nation, although there are significant variations in prices across geographic areas."
Over the last few years, the Census Bureau has been rethinking what it means to be "poor," and developing an alternative measure of poverty that addresses many of these issues. It starts with a dollar threshold of what is needed in a certain geographic area to buy a basic set of goods, including food, clothing, shelter, and utilities. It attempts to include in income the value of anti-poverty programs that offer in-kind benefits, like food stamps, and those that work through the tax code, like the earned income tax credit. It also subtracts expenses for income taxes, payroll taxes, child care, transportation to work, and out-of-pocket medical care. Instead of looking at a "household" as defined by family ties of birth, marriage, and adoption, the new measure basically looks at everyone living in a "consumer unit" at the same address, regardless of whether they are related.
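The new measure's bookkeeping boils down to additions and subtractions against a geographically adjusted threshold. A minimal sketch, where the function names and every dollar figure are hypothetical illustrations rather than the Census Bureau's actual methodology:

```python
# Illustrative sketch of Supplemental-Poverty-Measure-style accounting:
# add in-kind benefits and tax credits to cash income, subtract taxes and
# necessary expenses, then compare to a local threshold. All numbers are
# hypothetical.

def spm_resources(cash_income, in_kind_benefits, tax_credits,
                  taxes, work_expenses, medical_out_of_pocket):
    """Disposable resources available to a consumer unit."""
    return (cash_income + in_kind_benefits + tax_credits
            - taxes - work_expenses - medical_out_of_pocket)

def is_poor(resources, local_threshold):
    """A consumer unit counts as poor if resources fall below the
    geographically adjusted threshold."""
    return resources < local_threshold

resources = spm_resources(cash_income=25000, in_kind_benefits=3000,
                          tax_credits=2000, taxes=2500,
                          work_expenses=1500, medical_out_of_pocket=2000)
print(resources, is_poor(resources, local_threshold=25000))  # 24000 True
```

Notice that the same family can fall on different sides of the line under the two measures: in this toy example, the unit's cash income alone exceeds the threshold, but its disposable resources do not.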
When all this is done, what picture of poverty in the United States emerges? How does that picture of poverty differ from the official existing poverty rates? Here are some main themes:
The absolute number of people below the poverty line is much the same, but slightly higher. In 2010, there were 46.6 million people below the official poverty line, for a poverty rate of 15.2%; with the new Supplemental Poverty Measure, it would have been 49.1 million people below the poverty line, for a poverty rate of 16.0%. In this sense, my quick reaction is that the existing poverty line has held up fairly well. However, the Supplemental Poverty Measure identifies a somewhat different group of people as poor.
One striking difference is poverty rates by age. Under the official poverty rate, it has long been true that the poverty rate for those age 18 and younger is much higher than the poverty rate for those 65 and older: in 2010, the official "under 18 years" poverty rate was 22.5%, while the "over 65" poverty rate was 9.0%. However, under the new Supplemental Poverty Measure, the "under 18" poverty rate would be lower, at 18.2%, while the "over 65" poverty rate would be 15.9%. Part of the reason is that the official poverty measure uses a separate, lower threshold for the over-65 group, while the SPM does not. Counting food stamps, the earned income tax credit, and shared "consumer units" tends to reduce poverty rates among children, while taking out-of-pocket medical care expenses into account tends to increase poverty rates among the elderly.
Other differences emerge as well. Although the overall poverty rate would be higher under the Supplemental Poverty Measure, for certain groups the Supplemental Poverty Measure rate would be lower. For example, the poverty rate for blacks in 2010 was 27.5% under the official measure, but 25.4% under the SPM. The poverty rate for renters was 30.5% under the official measure, but 29.4% under the SPM. The poverty rate for those living outside metropolitan statistical areas was 16.6% with the official measure, but would be 12.8% under the SPM. The poverty rate in Midwestern states was 14.0% with the official measure in 2010, but would be 13.1% under the SPM.
At present, the Census Bureau treats the Supplemental Poverty Measure as "a research operation," and says that it will "improve the measures presented here as resources allow." The official poverty line will remain the line that is used in legislation and as a basis for eligibility for various government programs. This seems wise to me. One great virtue of the existing poverty line is that it isn't changing each year in response to research or political calculations, so it serves as a steady, if imperfect, standard of comparison over time.
But the Supplemental Poverty Measure as it develops seems sure to become part of our national conversation about poverty, because the way it is calculated raises questions about what it means for a "consumer unit" to be poor, and what it means to define poverty across a large country with many local and regional differences.
A State-Level Gold Standard?
Barry Eichengreen provides "A Critique of Pure Gold" in the September/October issue of the National Interest. He speaks for most economists in referring to the idea of a return to the gold standard as "an oddball proposal," and explains why in some detail. What caught my eye is that apparently some states have been considering requiring payments in the form of gold, a sort of mini gold standard. Eichengreen writes:
"A Montana measure voted down by a narrow margin of fifty-two to forty-eight in March would have required wholesalers to pay state tobacco taxes in gold. A proposal introduced in the Georgia legislature would have called for the state to accept only gold and silver for all payments, including taxes, and to use the metals when making payments on the state’s debt.
In May, Utah became the first state to actually adopt such a policy. Gold and silver coins minted by the U.S. government were made legal tender under a measure signed into law by Governor Gary Herbert. Given the difficulty of paying for a tank of gas with a $50 American eagle coin worth some $1,500 at current market prices, entrepreneurs then floated the idea of establishing private depositories that would hold the coin and issue debit cards loaded up with its current dollar value. It is unlikely this will appeal to the average motorist contemplating a trip to the gas station since the dollar value of the balance would fluctuate along with the current market price of gold. It would be the equivalent of holding one’s savings in the form of volatile gold-mining stocks.
Historically, societies attracted to using gold as legal tender have dealt with this problem by empowering their governments to fix its price in domestic-currency terms (in the U.S. case, in dollars)."
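The problem with the proposed gold-backed debit card is easy to see in miniature: the spendable balance would track the market price of the coin's gold content, not its $50 face value. A toy sketch with hypothetical prices:

```python
# Illustrative: a debit-card balance backed by one American eagle coin
# tracks the market value of its gold content, not the $50 face value.
# All prices are hypothetical.

FACE_VALUE = 50         # dollars stamped on the coin
OUNCES_OF_GOLD = 1.0    # a one-ounce eagle

def card_balance(market_price_per_ounce):
    """Dollar value of the card when gold trades at the given price."""
    return OUNCES_OF_GOLD * market_price_per_ounce

# The same coin on three different trading days:
for price in (1500, 1350, 1650):
    print(card_balance(price))
```

A balance that swings with the bullion market is what Eichengreen means by comparing the scheme to holding one's savings in volatile gold-mining stocks.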
It is odd, to say the least, that many of those who favor a gold standard have also been investing in gold hoping to see its price rise. But as Eichengreen notes, under a gold standard, the price of gold would typically be set at a fixed level, historically often a level below what would otherwise have been the market price. When President Richard Nixon officially ended what remained of the gold standard in 1971, gold was only used to pay debts to foreign governments holding U.S. dollars, and at a fixed price of $35/ounce.
Eichengreen traces how the idea of a gold standard has re-entered public discourse, championed by Ron Paul, who in turn refers to the work of Friedrich Hayek. But as Eichengreen reminds us, while Hayek was a fierce critic of central banking, and argued that central bankers needed to be controlled lest they conduct monetary policy in a way that feeds cycles of boom and bust in the economy, Hayek did not support a gold standard. In Eichengreen's words, summing up Hayek's standard arguments against a gold standard:
"At the end of The Denationalization of Money, Hayek concludes that the gold standard is no solution to the world’s monetary problems. There could be violent fluctuations in the price of gold were it to again become the principal means of payment and store of value, since the demand for it might change dramatically, whether owing to shifts in the state of confidence or general economic conditions. Alternatively, if the price of gold were fixed by law, as under gold standards past, its purchasing power (that is, the general price level) would fluctuate violently. And even if the quantity of money were fixed, the supply of credit by the banking system might still be strongly procyclical, subjecting the economy to destabilizing oscillations, as was not infrequently the case under the gold standard of the late nineteenth and early twentieth centuries."
Hayek's answer to the problems of unrestricted central bankers was to allow the rise of private sources of money. Eichengreen continues:
"For a solution to this instability, Hayek himself ultimately looked not to the gold standard but to the rise of private monies that might compete with the government’s own. Private issuers, he argued, would have an interest in keeping the purchasing power of their monies stable, for otherwise there would be no market for them. The central bank would then have no option but to do likewise, since private parties now had alternatives guaranteed to hold their value."