The Case For and Against Advertising

Advertising may be that rare case where economists are less cynical than the general public. For many people, advertising is the epitome of exploitative and wasteful capitalism, encouraging people to feel bad unless they spend money they don’t have on goods and services they don’t actually want or need. Many business firms spend their advertising dollars ruefully, while quoting the old line: “I know that half of the money I spend on advertising is wasted, but the problem is that I don’t know which half.”
But while many see this glass as totally empty, economists see it as half-full. Yes, advertising can be an arms race of expenditures that benefits no one, while creating consumer dissatisfaction. But it can also be a form of active competition leading to lower prices and better products for consumers. Here, I’ll give a few facts about advertising, review the classic arguments from Alfred Marshall’s 1919 classic Industry and Trade, and point out a bit of recent evidence that consumers may well benefit from lower prices when advertising expands.
Kantar Media reports that total U.S. advertising spending was $144 billion in 2011, which is about 1% of GDP. That money is pretty well spread out across advertisers and across different kinds of media: for example, the top 10 advertisers account for only a bit more than 10% of total advertising spending.

At the global level, according to Nielsen’s Global AdView Pulse Report, advertising totaled almost $500 billion in 2011.

 

The great economist Alfred Marshall laid out the social pros and cons of advertising in his 1919 book, Industry and Trade. On one side, he emphasizes the role of advertising in providing information and building reputations, names, and trademarks; on the other, he points out that it can be carried to excess. Here are a few of Marshall’s comments. Even a century ago, he was skeptical about how American advertising, in particular, was often carried to excess.

“The reputation acquired by large general advertising is easy of attainment, though expensive. It is indeed seldom of much value, unless accompanied by capable and honourable dealing: but, when attained, it extends in varying degrees to all products made or handled by the business: a name or a trade mark which has gained good fame in regard to one product is a great aid to the marketing of others.” (p. 180)

“On the other hand, some sorts of private retail trade are spending lavishly on competitive advertisements, most of which waste much of their force in neutralizing the force of rivals. In America, where they have been developed with more energy and inventive force than anywhere else …” (p. 195)

“Some of the implements of constructive advertisement are prominent in all large cities. For instance a good frontage on a leading thoroughfare; adequate space for the convenience of employees and for customers; lifts and moving staircases, etc., are all constructive, so long as they do not exceed the requirements of the business. That is to say, the assistance, which they afford to customers by enabling them to satisfy their wants without inordinate fatigue or loss of time, would be appropriate, even if the business were not in strong rivalry with others. But eager rivalry often causes them to be carried to an excess, which involves social waste; and ultimately tends to raise the charges which the public have to meet without adequate return.” (p. 200)

“On the other hand the combative force of mere capital obtrudes itself in the incessant iteration of the name of a product, coupled perhaps with a claim that it is of excellent quality. Of course no amount of expenditure on advertising will enable any thing, which the customers can fairly test for themselves by experience (this condition excludes medicines which claim to be appropriate to subtle diseases, etc.), to get a permanent hold on the people, unless it is fairly good relatively to its price. The chief influence of such advertisement is exerted, not through the reason, but through the blind force of habit: people in general are, for good and for evil, inclined to prefer that which is familiar to that which is not.” (p. 194)

“In conclusion it should be noted that academic students and professional advertising agents in America have united in applying modern methods of systematic and progressive analysis, observation, experiment, record, and provisional conclusion, in successive cycles to ascertaining the most effective forms of appeal. Psychology has been pressed into the service: the influence which repetition of an advertisement exerts has been subsumed as a special instance of the educative effect of repetition.” (p. 201)

Having stated the basic tradeoff of advertising, the question becomes: what empirical evidence is available on the point? Ferdinand Rauch describes some of his recent work on “Advertising and consumer prices” at the Vox blog. (I saw this study mentioned by Phil Izzo at the Real Time Economics blog.) Rauch points out that the evidence on how advertising affects prices seems to vary across industries:
 
“Existing empirical evidence has demonstrated that prices of various goods react to changes in advertising costs differently. For example, advertising seems to decrease prices for eyeglasses (Kwoka 1984), children’s breakfast cereals (Clark 2007) and drugs (Cady 1976), while it increases the supply price in brewing industries (Gallet and Euzent 2002).”
 
Rauch’s recent work takes a different approach by looking at what happened in Austria when a system in which each region imposed its own tax rate on advertising was replaced by a single common national tax rate. As a result of this change, the tax rate on advertising rose in some regions and fell in others. He finds:

“I first show that the taxation of advertising is indeed a powerful instrument to restrict advertising expenditures of firms. I also show that advertising increased consumer prices in some industries such as alcohol, tobacco and transportation, in which the persuasive effect dominates. But it also decreased consumer prices in other industries such as food. I use data from existing marketing studies which make it possible to relate different responses of market prices to characteristics of advertisements in industries. I can indeed show that those industries which exhibit the informative price include more information in their advertisements, consistent with the interpretation of informational and persuasive forces of advertising.

“The aggregate effect is informative, which means that, on average, advertising decreases consumer prices. This suggests that the Austrian advertising tax increases consumer prices and probably affects welfare adversely. I estimate that if the current 5% tax on advertising in Austria were abolished, consumer prices would decrease by about 0.25 percentage points on average.”
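
To make the flavor of this empirical exercise concrete, here is a minimal sketch of the kind of two-way fixed-effects regression that a natural experiment like this invites: consumer prices regressed on the advertising tax rate, with region and time effects absorbing region-specific levels and common shocks. All the data below are simulated and the variable names are my own; Rauch’s actual specification is considerably richer.

```python
# Hypothetical sketch of the identification idea behind a study like
# Rauch's: regions whose advertising tax rose vs. fell after the move
# to a single national rate. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for g in range(9):                      # nine regions
    tax_before = 0.02 + 0.01 * g        # region-specific rates, 2% to 10%
    for t in range(24):                 # two years of monthly data
        tax = tax_before if t < 12 else 0.05  # single 5% national rate after reform
        # an "informative advertising" world: a higher tax suppresses
        # price-reducing ads, so prices rise with the tax (coefficient +0.5)
        log_price = 1.0 + 0.05 * g + 0.002 * t + 0.5 * tax + rng.normal(0, 0.01)
        rows.append({"region": g, "month": t, "ad_tax_rate": tax,
                     "log_price": log_price})
df = pd.DataFrame(rows)

# Region and month dummies absorb levels and common shocks; the tax
# coefficient is identified by the reform-driven changes in tax rates.
model = smf.ols("log_price ~ ad_tax_rate + C(region) + C(month)", data=df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["region"]})
print(result.params["ad_tax_rate"])  # recovers roughly 0.5 in this simulated world
```

A positive coefficient on the tax is consistent with the informative story (the tax suppresses price-reducing advertising); a negative one with the persuasive story.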

Thus, the challenge for all of us as consumers of advertising is to consume the information that it provides without also swallowing the persuasion that it offers. In addition, whenever I feel annoyed that perhaps advertising has cost me some money, I try to remember that advertising pays essentially all of the production costs for my morning newspaper and for most of the television that I watch.
 
 

The Uncertain Future for Universities

Ernst & Young has produced an interesting report called “University of the Future: A thousand year old industry on the cusp of profound change.” Although the report is aimed specifically at Australian universities, many of the insights apply all around the world. The tone of the report is summed up right at the start:

“Our primary hypothesis is that the dominant university model in Australia — a broad-based teaching and research institution, supported by a large asset base and a large, predominantly in-house back office — will prove unviable in all but a few cases over the next 10-15 years. At a minimum, incumbent universities will need to significantly streamline their operations and asset base, at the same time as incorporating new teaching and learning delivery mechanisms, a diffusion of channels to market, and stakeholder expectations for increased impact. At its extreme, private universities and possibly some incumbent public universities will create new products and markets that merge parts of the education sector with other sectors, such as media, technology, innovation, and venture capital.”

Tertiary education is on the rise all around the world. This figure shows participation rates for 18-22 year-olds in tertiary education around the world. Just from 2000 to 2010, the percentage has tripled in China, and more-or-less doubled in India, East Asia and the Pacific, and Latin America. (Side note: “MENA” in the figure refers to the “Middle East and North Africa” region.)

Finishing a four-year college degree is historically something that happens for well under half of students in high-income countries, and for only a tiny slice of students in low-income countries. With the dramatic expansion of attendance, the traditional model won’t work well: it’s just too costly on a per-student basis. How this industry will shake out in a world of digital technology and global mobility, along with research programs that are increasingly intertwined with industry, is not at all clear. But the E&Y report offers a glimpse of some of the possibilities. Here are a few thoughts, scattered through the report, that jumped out at me.

“The likely outcome over the next 10-15 years is the emergence of a small number of elite, truly global university ‘brands’. These global brands of the future will include some of the ‘usual suspects’ — a subset of Ivy League and Oxbridge institutions — as well as a number of elite institutions from China.”

“The relationship between industry and the higher education sector is changing and deepening. Industry plays multiple roles: as customer and partner of higher education institutions and, increasingly, as a competitor. … Research commercialisation will go from being a fringe activity to being a core source of funding for many universities’ research programs. … Finally, industry will increasingly compete with universities in a number of specialist professional programs. Accounting industry bodies already provide a range of specialised postgraduate programs (CPA, CA, CFA etc). Other industry groups, for example engineering associations and pharmacy guilds, may play an increased role as certifiers and deliverers of content.”

“Organisations in other knowledge-based industries, such as professional services firms, typically operate with ratios of support staff to front-line staff of 0.3 to 0.5. That is, 2-3 times as many front-line staff as support staff. Universities may not reach these ratios in 10-15 years, but given the ‘hot breath’ of market forces and declining government funding, education institutions are unlikely to survive with ratios of 1.3, 1.4, 1.5 and beyond.”

“Use of assets is also an area with scope for much greater efficiency. Most universities own and maintain a sizeable asset base, much of which is used only for four days per week over two 13-week semesters — not much more than 100 days per year.”

“Incumbent public universities bring two critical assets to this model: credibility and academic capability. In an age of ubiquitous content, ‘content is king’ no longer applies. Credibility is king — and increasingly ‘curation is king’. Universities are uniquely positioned to bring credibility and to act as curators of content. The challenge for public universities in this world is to cut the right deal — a deal that builds in brand protection and a reasonable share of the value created.”

One university vice-chancellor is quoted as saying: “Our major competitor in ten years time will be Google … if we’re still alive!” Another is quoted as saying: “The traditional university model is the analogue of the print newspaper … 15 years max, you’ve got the transformation.” And yet another is quoted as saying: “Universities face their biggest challenge in 800 years.”

I would only add that universities and colleges typically don’t like to think of themselves as “businesses,” but even nonprofit institutions have a “business model” in which revenue needs to match up to expenditures. The higher education business model is going to be dramatically disrupted, and so far, we’ve only seen the front edge of those changes.

Raising U.S. Exports

The U.S. Census Bureau is out with the monthly trade data for September 2012. The trade deficit is down a bit since August, but not much changed over the last few years. Here’s the monthly trade picture of imports and exports over the last couple of years. But what I want to focus on here is the possibility of raising U.S. exports as part of helping to regenerate economic growth.

Graph of International Trade Balances

The U.S. GDP was $14.6 trillion in 2010. By World Bank estimates, with the conversions between currencies done at purchasing power parity exchange rates, U.S. GDP was essentially equal to the combined GDP of China ($10.1 trillion) and India ($4.2 trillion), which sums to $14.3 trillion. In the two years since then, the combined GDP of China and India has surely come to exceed that of the U.S. And looking ahead, the annual economic growth rates of China, India, and some other emerging market economies are likely to be a multiple of U.S. growth rates. In much of the last few decades, the U.S. consumer has been driving the world economy with buying and borrowing. But in the next few decades, emerging economies like China, India, and others will be the drivers of world economic growth.

So how well-prepared is the U.S. economy to find its niche in the global supply chains that are of increasing importance in the world economy? The World Economic Forum offers an answer in “The Global Enabling Trade Report 2012,” published last spring. Chapter 1 of the report is called “Reducing Supply Chain Barriers: The Enabling Trade Index 2012,” by Robert Z. Lawrence, Sean Doherty, and Margareta Drzeniek Hanouz. They sum up the growing importance of global supply chains in this way:

“Traded commodities are increasingly composed of intermediate products. Reductions in transportation and communication costs and innovations in policies and management have allowed firms to operate global supply chains that benefit from differences in comparative advantage among nations, both through international intra-firm trade and through networks that link teams of producers located in different countries. Trade and foreign investment have become increasingly complementary activities. … Increasingly, countries specialize in tasks rather than products. Value is now added in many countries before particular goods and services reach their final destination, and the traditional notion of trade as production in one country and consumption in another is increasingly inaccurate.”

Their “enabling trade” index ranks countries on how well their institutions and infrastructure support global trade, across a number of categories. Here’s the list of the top 25 countries in their ranking: the U.S. economy ranked 19th in 2010, and dropped to 23rd in the 2012 rankings.

I’m not especially bothered that the U.S. ranks behind national economies that are massively oriented to international trade, like Singapore and Hong Kong. But ranking behind other large high-income economies like Sweden, Canada, Germany, Japan and France is more troubling. Remember, countries that enable trade are the ones that will be prepared to participate in the main sources of future global economic growth. Here’s how Lawrence, Doherty, and Hanouz sum up the U.S. situation.

“Dropping four places, the United States continues its downward trend since the last edition and is ranked 23rd this year. The country’s performance has fallen in international comparison in almost all areas assessed by the Index, bar the efficiency of its border procedures and the availability of logistics services. The regulatory environment appears less conducive to business than in previous years, falling by 10 ranks from 22nd to 32nd. Concerns regarding the protection of property rights, undue influence on government and judicial decisions, and corruption are on the rise. And as in previous years, protection from the threat of terrorism burdens the business sector with very high cost (112th), and US exporters face some of the highest trade barriers abroad. Yet overall the United States continues to benefit from hassle-free import and export procedures (17th) and efficient customs clearance (14th), thanks to excellent customs services to business (3rd). The country also boasts excellent infrastructure, including ICTs, providing a strong basis for enabling trade within the country.”

The U.S. economy, with its enormous domestic market, has traditionally not had to treat exports and foreign trade as essential to its own growth. But the global economy is changing, and more U.S. producers need to start looking longer and harder beyond their national borders.

Hydraulic Models of the Economy: Phillips, Fisher, Financial Plumbing

Part of the lore of earlier economists, as it was passed down to me around the campfire back in the Neolithic era, is the story of how Alban William Housego (Bill) Phillips, the originator of the famous 1958 paper that drew the “Phillips curve” tradeoff between unemployment and inflation, also built a hydraulic economic model: that is, a physical model of the economy in which flows of consumption, saving, investment, and other economic forces were represented by liquid moving through tubes and pipes. What I hadn’t known until more recently is that Irving Fisher also created a hydraulic model of the economy.

As a starting point for background on Bill Phillips and his famous 1958 Phillips curve paper, I can recommend the article by A.G. Sleeman called “Retrospectives: The Phillips Curve: A Rushed Job?” which appeared in the Winter 2011 issue of my own Journal of Economic Perspectives. (Like all articles in JEP from the current issue back to the first issue in 1987, it is freely available on-line compliments of the American Economic Association.) The hydraulic computer is not the main focus of Sleeman’s article, but he provides evidence that it was a major part of Phillips’ career.

Apparently, Phillips completed his undergraduate degree at the London School of Economics in June 1949, specializing in sociology, and receiving only a “Pass.” From there, Sleeman writes (footnotes and references omitted):

“In 1950, despite his poor degree, Phillips was appointed an Assistant Lecturer in the Department of Economics at LSE at the top of the pay scale, and simultaneously began his Ph.D. studies. The reason was that by 1949, Phillips had built the MONIAC: Monetary National Income Analogue Computer. (The name is a play on ENIAC, the Electronic Numerical Integrator and Computer, which had been announced in 1946 as the first general-purpose electronic computer.) MONIAC was a hydraulic machine, made of transparent plastic pipes and tanks fastened to a wooden board, about six feet high, four feet wide, and three feet deep. The MONIAC used colored water to represent the stocks and flows of an IS–LM style model and simulated how the model behaved as monetary and fiscal variables varied. The MONIAC brought Phillips to the attention of James Meade and, ultimately, to Lionel Robbins and other members of the LSE economics department. In 1950, Phillips published a paper on MONIAC in the August issue of Economica eight months after failing his Applied Economics and Economic History exams and passing Principles by a single point. In October 1951, the 36 year-old Phillips was promoted to Lecturer and tenured having published only one paper. Over the next two years Phillips completed his doctorate…”

Phillips was born in New Zealand, and a December 2007 article in the Reserve Bank of New Zealand Bulletin discusses the MONIAC. Here’s a photo of Phillips standing beside the MONIAC at LSE, and a more recent photo of a MONIAC that the New Zealand central bank received from LSE and has restored to working order.

In the New Zealand central bank publication, Tim Ng and Matthew Wright describe the functioning of MONIAC this way: “Separate water tanks represent households, business, government, exporting and importing sectors of the economy. Coloured water pumped around the system measures income, spending and GDP. The system is programmable and capable of solving nine simultaneous equations in response to any change of the parameters, to reach a new equilibrium. A plotter can record changes in the trade balance, GDP and interest rates on paper. Simulation experiments with fiscal policy, monetary policy and exchange rates can be carried out. Although the MONIAC was conceived as a teaching tool, it is also capable of generating economic forecasts. Phillips himself used the MONIAC as a teaching tool at the London School of Economics. Around 14 machines were built …” For those with an insatiable need to know more about the MONIAC, I recommend this special December 2011 issue of Economia Politica: Journal of Analytical and Institutional Economics, which includes about a dozen articles about the lasting influence of Phillips and the MONIAC.
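
As a rough illustration of the computation the MONIAC carried out with water, here is a minimal sketch that solves a two-equation textbook IS-LM system for equilibrium income and the interest rate, then re-solves it after a fiscal expansion. Every parameter value is invented for illustration; Phillips’s machine embodied a richer dynamic model with nine equations.

```python
# A stripped-down IS-LM equilibrium of the kind MONIAC computed
# hydraulically. All parameter values are illustrative.
import numpy as np

# Goods market (IS): Y = a + b*(Y - T) + e - d*r + G
# Money market (LM): M/P = k*Y - h*r
a, b, e, d, G, T = 200.0, 0.75, 300.0, 40.0, 400.0, 300.0
k, h, M_over_P = 0.5, 100.0, 800.0

# Rearranged as a linear system A @ [Y, r] = c:
#   (1 - b)*Y + d*r = a - b*T + e + G   (IS)
#   k*Y - h*r       = M/P               (LM)
A = np.array([[1 - b, d],
              [k, -h]])
c = np.array([a - b * T + e + G, M_over_P])
Y, r = np.linalg.solve(A, c)
print(f"equilibrium: Y = {Y:.1f}, r = {r:.2f}")

# Raise government spending by 100 and re-solve: the comparative
# statics that MONIAC displayed as shifting water levels.
c2 = np.array([a - b * T + e + G + 100.0, M_over_P])
Y2, r2 = np.linalg.solve(A, c2)
print(f"after fiscal expansion: Y = {Y2:.1f}, r = {r2:.2f}")
```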

But until recently, I hadn’t known that Irving Fisher had also built a hydraulic model of the economy as part of his doctoral dissertation back in 1891. I learned about it in the article by Robert W. Dimand and Rebeca Gomez Betancourt in the most recent issue of my own JEP. Their article is primarily focused, as the title notes, on “Irving Fisher’s Appreciation and Interest (1896) and the Fisher Relation.” But in their capsule overview of Fisher’s life, they write (citations and footnotes omitted):

“His 1891 doctoral dissertation in mathematics and political economy (Yale’s first Ph.D. in political economy or economics), which was published as Mathematical Investigations in the Theory of Value and Prices (1892), brought general equilibrium analysis to North America; it was supervised jointly by the physicist and engineer J. Willard Gibbs and the economist and sociologist William Graham Sumner. Paul Samuelson once described Fisher (1892) as the “greatest doctoral dissertation in economics ever written” … because Fisher invented general equilibrium analysis for himself before his last minute discovery of the writings of Léon Walras and Francis Ysidro Edgeworth. Fisher’s thesis went beyond these writings in one striking respect: influenced by Gibbs’s work in mechanics, Fisher not only imagined but actually built a hydraulic mechanism to simulate the determination of equilibrium prices and quantities—in effect, a hydraulic computer in the days before electronic computers …”

Oddly enough, Fisher also wrote a paper that is an early harbinger of the Phillips curve literature. As Dimand and Betancourt write: “In a series of articles, Fisher correlated distributed lags of price level changes with economic activity and unemployment. His article “A Statistical Relationship between Unemployment and Price Level Changes” (1926 [1973]), little noticed when first published by the International Labour Office, attracted rather more attention when reprinted almost 50 years later in the Journal of Political Economy as “Lost and Found: I Discovered the Phillips Curve—Irving Fisher.”

I’m not aware of any working models of Fisher’s hydraulic computer, nor of any photographs of a working model. But back in 2000, William C. Brainard and Herbert E. Scarf took on the task of investigating how the model worked in “How to Compute Equilibrium Prices in 1891.” They reprint these sketches of Fisher’s hydraulic computer from his dissertation. It apparently consisted of a series of cisterns, rods, floats, bellows, and tubes. It represents three consumers and three goods that they consume.

Apparently, Fisher used his hydraulic model of the economy as a teaching tool for 25 years. Brainard and Scarf write (references omitted):

“Fisher regarded his model as “the physical analog of the ideal economic market,” with the virtue that “The elements which contribute to the determination of prices are represented each with its appropriate role and open to the scrutiny of the eye …” providing a “clear and analytical picture of the interdependence of the many elements in the causation of prices …” Fisher also saw the machine as a way of demonstrating comparative static results, “… to employ the mechanism as an instrument of investigation and by it, study some complicated variations which could scarcely be successfully followed without its aid. …

“Although we do not know what experiments Fisher actually ran with his machine, he does describe eight comparative static exercises. Some of these illustrate basic features of the system, for example that proportional increases in money incomes result in an equal proportional increase of each price, with no change in the allocation of goods. Another simple exercise discussed by Fisher examines whether proportional increases in the endowment of goods necessarily result in proportional decreases in prices, as was apparently, and incorrectly, believed by Mill. Some exercises illustrate less intuitive properties of exchange economies: increasing one individual’s income may make some other individual better off, and also the possibility of ‘immiserating growth,’ i.e. increasing an individual’s endowment of a good may actually lower his welfare.”
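
To give a feel for the computation Fisher’s machine performed, here is a minimal sketch under simplifying assumptions: three consumers with Cobb-Douglas preferences and fixed money incomes, and three goods in fixed supply. Fisher’s own model allowed more general utility functions, and every number below is invented for illustration. The sketch also reproduces the first comparative-static exercise mentioned above: scaling all money incomes proportionally scales all prices, leaving the allocation of goods unchanged.

```python
# Equilibrium prices in a 3-consumer, 3-good exchange economy with
# Cobb-Douglas preferences (a simplification of Fisher's setup).
import numpy as np

alpha = np.array([[0.5, 0.3, 0.2],   # consumer 1's expenditure shares
                  [0.2, 0.5, 0.3],   # consumer 2
                  [0.3, 0.2, 0.5]])  # consumer 3 (each row sums to 1)
income = np.array([100.0, 150.0, 250.0])  # money incomes (illustrative)
supply = np.array([50.0, 40.0, 60.0])     # fixed supplies of the three goods

def equilibrium_prices(income):
    # Cobb-Douglas demand: x[i, j] = alpha[i, j] * income[i] / p[j].
    # Market clearing then gives p[j] = total spending on j / supply of j.
    return alpha.T @ income / supply

p = equilibrium_prices(income)
print("equilibrium prices:", p)

# Fisher's first exercise: doubling all money incomes doubles every
# price, so the quantities x[i, j] are unchanged.
print("after doubling incomes:", equilibrium_prices(2 * income))
```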

The idea of a hydraulic computer seems anachronistic in these days of electronic computation, but as an illustrative teaching tool, I can imagine that watching flows of liquid rebalance might be at least as useful as watching a professor sketch a supply and demand diagram. In addition, the notion of the economy as a hydraulic set of forces still has considerable rhetorical power. We talk about “liquidity” and “bubbles.” The Federal Reserve publishes “Flow of Funds” accounts for the U.S. economy. When economists talk about the financial crisis of 2008 and 2009, they sometimes talk in terms of financial “plumbing.” For example, here’s Darrell Duffie:

“And there has been a lot of progress made, but I do feel that we’re looking at years of work to improve the plumbing, the infrastructure. And what I mean by that are institutional features of how our financial markets work that can’t be adjusted in the short run by discretionary behavior. They’re just there or they’re not. It’s a pipe that exists or it’s a pipe that’s not there. And if those pipes are too small or too fragile and therefore break, the ability of the financial system to serve its function in the macroeconomy … is broken. If not well designed, the plumbing can get broken in any kind of financial crisis if the shocks are big enough. It doesn’t matter if it’s a subprime mortgage crisis or a eurozone sovereign debt crisis. If you get a big pulse of risk that has to go through the financial system and it can’t make it through one of these pipes or valves without breaking it, then the financial system will no longer function as it’s supposed to and we’ll have recession or possibly worse.”

I find myself wondering what a hydraulic model of an economy would look like if it also included bubbles, runs on financial institutions, and credit crunches, along with tubes that could break. Sounds messy, and potentially quite interesting.

Options for the Deficit from CBO

The Congressional Budget Office has just published a report called “Choices for Deficit Reduction.” To me, the bottom line is that coming to grips with the U.S. fiscal situation in the medium run is going to require going outside everyone’s current comfort zone.

To set the stage, here are the CBO projections for the debt/GDP ratio in the next couple of decades. As I’ve discussed before on this blog (for example, here), the CBO is required by law to calculate a “baseline scenario” which projects the debt under current law. The difficulty with this approach is that current law can incorporate all kinds of future spending cuts or tax increases that aren’t actually going to happen when the time comes. Thus, the CBO also calculates an “alternative fiscal scenario,” which is based on four assumptions:

  • “That all expiring tax provisions (other than the recent reduction in the payroll tax for Social Security), including tax provisions that expired at the end of December 2011, are extended;
  • That the parameters of the alternative minimum tax (AMT) are indexed to increase with inflation after 2011 (starting from the 2011 exemption amount);
  • That Medicare’s payment rates for physicians’ services are held constant at their current level; and
  • That provisions of the Budget Control Act of 2011 that established automatic enforcement procedures designed to reduce discretionary and mandatory spending beginning in January 2013 do not go into effect, although the law’s original caps on discretionary appropriations remain in place.”

The “alternative fiscal scenario” provides a useful starting point, because you can then look at how changing each of these four assumptions, along with many other policy changes, would affect the debt/GDP ratio over time. Under the alternative scenario, the debt/GDP ratio is headed for the danger zone of about 90-100% by the early 2020s, and then is headed for unimaginably high levels after that. (For discussions of why that 90-100% ratio is likely to be important, see earlier posts here, here, and here.)

As one way of getting a handle on what needs to be done, the CBO looks at a variety of possible spending and tax policies, and how they would affect the budget deficit that in the alternative fiscal scenario would be about $1 trillion in 2020. Thus, steps that reduce the projected 2020 deficit by $1 trillion would balance the budget, and get the debt/GDP ratio on a declining path. Steps that reduce the projected 2020 deficit by $500 billion would keep the debt/GDP ratio in 2020 about the same as it is now–but the debt/GDP ratio would start rising after that point. An intermediate set of policies that reduce the 2020 deficit by $750 billion would put the long-term trend of the debt/GDP ratio on a slightly declining path–more-or-less what the baseline scenario looks like.
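
The debt arithmetic behind these three scenarios is worth making explicit. Here is an illustrative sketch with round hypothetical numbers, not the CBO’s actual projections; it also holds the nominal deficit constant, whereas the CBO projects deficits that grow over time, so it understates the long-run drift.

```python
# Toy projection of the debt/GDP ratio: debt grows by the deficit,
# nominal GDP grows at a steady rate. All numbers are illustrative.

def debt_to_gdp_path(debt, gdp, deficit, growth=0.03, years=10):
    """Return the debt/GDP ratio each year for a constant nominal deficit."""
    path = []
    for _ in range(years):
        debt += deficit
        gdp *= 1 + growth
        path.append(debt / gdp)
    return path

gdp0, debt0 = 20_000.0, 15_000.0   # $billions, a hypothetical 75% starting ratio

for cut in (0, 500, 750, 1_000):   # deficit reduction from a $1 trillion deficit
    final = debt_to_gdp_path(debt0, gdp0, deficit=1_000.0 - cut)[-1]
    print(f"cut ${cut}B: debt/GDP {debt0 / gdp0:.0%} -> {final:.0%} after 10 years")
```

Under these made-up parameters, no cut sends the ratio rising, a $500 billion cut holds it roughly steady, and larger cuts put it on a declining path, which is the qualitative pattern the CBO scenarios describe.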

So what do the possible policy options look like? They come in three tables: possibilities for reducing mandatory spending, possibilities for reducing discretionary spending, and possibilities for increasing taxes. You’ll notice that many of the recommendations come with various footnotes and caveats, which in turn require heading for the actual report. Some of the recommendations overlap in various ways (like different methods of altering Social Security or Medicare), and so you can’t just add up all the choices. But for a quick sense of the issue, just glance at the right-hand numbers showing the total projected change in the deficit. If you take the intermediate goal of cutting the projected 2020 deficit by $750 billion, most people will very quickly run out of easily palatable options.

For example, repealing the expansion of health insurance coverage and the “individual mandate” to buy health insurance from the Affordable Care Act would reduce the 2020 deficit by about $190 billion, which is real money, but not nearly enough (and would leave tens of millions of Americans again without health insurance). Or letting the tax cuts enacted in 2001, 2003, and 2009 expire only for couples filing joint tax returns with income over $250,000 per year and for single taxpayers with income over $200,000, along with indexing the alternative minimum tax (AMT) for inflation, would reduce the 2020 deficit by $110 billion, which is real money, but again not nearly enough to do what is needed. So even if Republicans and Democrats were to compromise on eliminating Obama’s health care plan in exchange for eliminating the Bush tax cuts for those above $200,000 or $250,000 per year, the combined $300 billion would be less than half of the $750 billion deficit reduction. To put it another way, Americans and the U.S. economy are more drunk on deficit spending, and withdrawal is going to be more painful, than most people realize.

Here are the three tables of options:

 

Fetal Origins and Epigenetics: Interview with Janet Currie

Douglas Clement has an insightful interview with Janet Currie in the September 2012 issue of The Region magazine published by the Federal Reserve Bank of Minneapolis. As Currie says near the start of the interview: “Labor economists think a lot about human capital and investments in it. Traditionally, that’s something to do with education, … [b]ut I’m interested in health as human capital as well, and understanding how health and education intersect. … It is a broad concept, human capital … Not all these different boxes, but an integrated whole.” The interview makes a number of interesting points about an array of subjects, including how financial incentives affect the practice of medicine (no matter what doctors say!), early childhood education, and women in the economics profession.

Here, I’ll focus on one topic: the fetal origins of inequality. Here’s the question from Clement, and part of Currie’s answer:

Region: Could you briefly review the fetal origins hypothesis and how economists have expanded its reach—to test scores, education and income as well as health?

Currie: I think the phrase itself was coined by David Barker, a physician who was interested in whether there was a biological mechanism such that if the fetus was starved in utero it would be more likely to be obese or more likely to have heart disease or diabetes, things related to that in later life.
… An infant programmed in this way would then be more likely to gain a lot of weight later on and to have diseases related to obesity…. I believe Thalidomide was the first thing that really shocked people and showed that if you give drugs to the woman, that it could have an effect on the fetus. People were also working on the Dutch “Hunger Winter” prior to Barker, looking into whether being literally starved in utero had long-term effects.

So economists have taken that idea and run with it. Economic studies are examining a wide range of things that might affect fetal health and asking whether they have long-term consequences. I think there’s pretty broad acceptance now of the idea that all kinds of things that happen when people are in utero seem to have a long-term effect.

One of the things I talked about in my Ely lecture was what mechanism might underlie the long term effects, and I raised the idea of “epigenetic” changes as one possibility. The way I like to think about that is you have the gene, which only changes very slowly when you have mutations. But then kind of on top of the gene you have the epigenome, which determines which parts of the gene are expressed. And that can change within one generation. There are animal experiments that do things like change the diet of guinea pigs and all the baby guinea pigs come out a different color. It can be pretty dramatic. …   The idea is that the fetal period might be particularly important because these epigenetic switches are being set one way or another. And then once they’re set, it’s more difficult to change them later on.

I think we haven’t really been able to look at all of the implications of that given the limitations of the data. We don’t have very much data where we can follow people from, say, in utero to some later period. But, that’s where the frontier is, trying to do that kind of research and make those linkages….

I think a really interesting thing about the fetal origins hypothesis for public policy is that if it’s really important what happens to the fetus, and some people think that maybe the first trimester is the most important or the most vulnerable period, then you’re talking about women who might not even know that they’re pregnant. It really means you should be targeting a whole different population than, say, 15 years ago, when we thought, oh, we need to be targeting preschool kids instead of kids once they reach school age. Now we’re kind of pushing it back. Then it was, “We need to be playing Mozart to infants.” Now the implication is that we’ve got to reach these mothers before they even get pregnant if we really want to improve conditions.

Epigenetics implies that it does not make sense to talk about nature versus nurture. If nature is the gene and nurture is the thing that sets the switches, then the outcome depends on both of those things. So you can’t really talk about nature or nurture in most situations. It has to be some combination of both. …

One thing that is interesting—and I’m starting to do a little bit of work like this myself—is thinking about children in developing countries. Things we’re looking at here in the United States, like the effects of in utero exposure to pollution on child health and economic outcomes, involve problems that are much worse in developing countries. So if we can find an effect here … for instance, my E-ZPass paper suggested that the incidence of low birth weight was 8 percent higher for pregnant women who are subjected to large amounts of auto exhaust because they live near highway toll plazas. If that is true here, then what must be the effect in Beijing? It must be even bigger than that.

Currie, along with Douglas Almond, wrote on this topic in the Summer 2011 issue of my own Journal of Economic Perspectives in “Killing Me Softly: The Fetal Origins Hypothesis” (25(3): 153-72). That article, like all articles in JEP back to the first issue in 1987, is freely available to all courtesy of the American Economic Association. Another useful starting point for this literature is Currie’s Ely lecture on this topic (mentioned in passing in her answer above): “Inequality at Birth: Some Causes and Consequences.” American Economic Review, 101(3): 1-22. The AER is not freely available on-line, but many in academia will have on-line access through AEA memberships or library subscriptions.

For yet another recent angle, there’s an article in a recent Economist magazine on “Epigenetics and Health: Grandma’s curse.” The big question is whether an acquired characteristic, like, say, asthma that is caused by heavy smoking, can be inherited. A plain-vanilla theory of inheritance says this isn’t possible, because smoking is bad for you, but it doesn’t alter your genes. However, studies on pregnant rats apparently show that if a first generation of rats is dosed with nicotine, which leads to asthma, then the first generation of offspring also has a propensity to asthma. That result is unsurprising, because it’s just the fetal origins hypothesis in action. But apparently, the next generation of rats after that also has a greater propensity to asthma, although that generation was never exposed to nicotine directly. The epigenetic explanation is that nicotine doesn’t change genes, but it can alter the “switches” that determine what characteristics are expressed by those genes, and those different “switches” can be to some extent inherited. As the article writes: “[T]hose epigenetic changes that are inherited seem to be subsequently reversible. But the idea that acquired characteristics can be inherited at all is still an important and novel one …”

Fall 2012 Journal of Economic Perspectives

The Fall 2012 issue of my own Journal of Economic Perspectives is now freely available on-line. Actually, all issues of JEP back to the first issue in 1987 are freely available on-line, compliments of the American Economic Association.

The issue starts with a three-paper symposium on the issue of “contingent valuation”: whether it makes sense to use survey techniques to estimate the costs of events like the Exxon Valdez oil spill in 1989 or the BP oil spill in 2010. The symposium has an overview paper laying out the issues, followed by pro and con viewpoints. Next comes a five-paper symposium on various aspects of China’s economy: labor markets, macroeconomic imbalances, and perspectives on patterns of long-term growth. The final three papers are about the Clark medal awarded to Amy Finkelstein, about Irving Fisher’s famous 1896 paper, and a “Recommendations for Further Reading” that I contribute to each issue.

Symposium on Contingent Valuation

“From Exxon to BP: Has Some Number Become Better Than No Number?” by Catherine L. Kling, Daniel J. Phaneuf and Jinhua Zhao

On March 23, 1989, the Exxon Valdez ran aground in Alaska’s Prince William Sound and released over 250,000 barrels of crude oil, resulting in 1300 miles of oiled shoreline. The Exxon spill ignited a debate about the appropriate compensation for damages suffered, and among economists, a debate concerning the adequacy of methods to value public goods, particularly when the good in question has limited direct use, such as the pristine natural environment of the spill region. The efficacy of stated preference methods generally, and contingent valuation in particular, is no mere academic debate. Billions of dollars are at stake. An influential symposium appearing in this journal in 1994 provided arguments for and against the credibility of these methods, and an extensive research program published in academic journals has continued to this day. This paper assesses what occurred in this academic literature between the Exxon spill and the BP disaster. We will rely on theoretical developments, neoclassical and behavioral paradigms, empirical and experimental evidence, and a clearer elucidation of validity criteria to provide a framework for readers to ponder the question of the validity of contingent valuation and, more generally, stated preference methods.

“Contingent Valuation: A Practical Alternative When Prices Aren’t Available,” by Richard T. Carson

A person may be willing to make an economic tradeoff to assure that a wilderness area or scenic resource is protected even if neither that person nor (perhaps) anyone else will actually visit this area. This tradeoff is commonly labeled “passive use value.” Contingent valuation studies ask questions that help to reveal the monetary tradeoff each person would make concerning the value of goods or services. Such surveys are a practical alternative approach for eliciting the value of public goods, including those with passive use considerations. First I discuss the Exxon Valdez oil spill of March 1989, focusing on why it is important to measure monetary tradeoffs for goods where passive use considerations loom large. Although discussions of contingent valuation often focus on whether the method is sufficiently reliable for use in assessing natural resource damages in lawsuits, it is important to remember that most estimates from contingent valuation studies are used in benefit–cost assessments, not natural resource damage assessments. Those working on benefit–cost analysis have long recognized that goods and impacts that cannot be quantified are valued, implicitly, by giving them a limitless value when government regulations preclude certain activities, or giving them a value of zero by leaving certain consequences out of the analysis. Contingent valuation offers a practical alternative for reducing the use of either of these extreme choices. I put forward an affirmative case for contingent valuation and address a number of the concerns that have arisen.

“Contingent Valuation: From Dubious to Hopeless,” by Jerry Hausman

Approximately 20 years ago, Peter Diamond and I wrote an article for this journal analyzing contingent valuation methods. At that time Peter’s view was that contingent valuation was hopeless, while I was dubious but somewhat more optimistic. But 20 years later, after millions of dollars of largely government-funded research, I have concluded that Peter’s earlier position was correct and that contingent valuation is hopeless. In this paper, I selectively review the contingent valuation literature, focusing on empirical results. I find that three long-standing problems continue to exist: 1) hypothetical response bias that leads contingent valuation to overstatements of value; 2) large differences between willingness to pay and willingness to accept; and 3) the embedding problem which encompasses scope problems. The problems of embedding and scope are likely to be the most intractable. Indeed, I believe that respondents to contingent valuation surveys are often not responding out of stable or well-defined preferences, but are essentially inventing their answers on the fly, in a way which makes the resulting data useless for serious analysis. Finally, I offer a case study of a prominent contingent valuation study done by recognized experts in this approach, a study that should be only minimally affected by these concerns but in which the answers of respondents to the survey are implausible and inconsistent.

Symposium on China’s Economy

“The End of Cheap Chinese Labor,” by Hongbin Li, Lei Li, Binzhen Wu and Yanyan Xiong

In recent decades, cheap labor has played a central role in the Chinese model, which has relied on expanded participation in world trade as a main driver of growth. At the beginning of China’s economic reforms in 1978, the annual wage of a Chinese urban worker was only $1,004 in U.S. dollars. The Chinese wage was only 3 percent of the average U.S. wage at that time, and it was also significantly lower than the wages in neighboring Asian countries such as the Philippines and Thailand. The Chinese wage was also low relative to productivity. However, wages are now rising in China. In 2010, the annual wage of a Chinese urban worker reached $5,487 in U.S. dollars, which is similar to wages earned by workers in the Philippines and Thailand and significantly higher than those earned by workers in India and Indonesia. China’s wages also increased faster than productivity since the late 1990s, suggesting that Chinese labor is becoming more expensive in this sense as well. The increase in China’s wages is not confined to any sector, as wages have increased for both skilled and unskilled workers, for both coastal and inland areas, and for both exporting and nonexporting firms. We benchmark wage growth to productivity growth using both national- and industry-level data, showing that Chinese labor was kept cheap until the late 1990s but the relative cost of labor has increased since then. Finally, we discuss the main forces that are pushing wages up.

“Labor Market Outcomes and Reforms in China,” by Xin Meng

Over the past few decades of economic reform, China’s labor markets have been transformed to an increasingly market-driven system. China has two segregated economies: the rural and urban. Understanding the shifting nature of this divide is probably the key to understanding the most important labor market reform issues of the last decades and the decades ahead. From 1949, the Chinese economy allowed virtually no labor mobility between the rural and urban sectors. Rural-urban segregation was enforced by a household registration system called “hukou.” Individuals born in rural areas receive “agriculture hukou” while those born in cities are designated as “nonagricultural hukou.” In the countryside, employment and income were linked to the commune-based production system. Collectively owned communes provided very basic coverage for health, education, and pensions. In cities, state-assigned life-time employment, centrally determined wages, and a cradle-to-grave social welfare system were implemented. In the late 1970s, China’s economic reforms began, but the timing and pattern of the changes were quite different across rural and urban labor markets. This paper focuses on employment and wages in the urban labor markets, the interaction between the urban and rural labor markets through migration, and future labor market challenges. Despite the remarkable changes that have occurred, inherited institutional impediments still play an important role in the allocation of labor; the hukou system remains in place, and 72 percent of China’s population is still identified as rural hukou holders. China must continue to ease its restrictions on rural–urban migration, and must adopt policies to close the widening rural-urban gap in education, or it risks suffering both a shortage of workers in the growing urban areas and a deepening urban-rural economic divide.

“Understanding China’s Growth: Past, Present, and Future,” by Xiaodong Zhu

The pace and scale of China’s economic transformation have no historical precedent. In 1978, China was one of the poorest countries in the world. The real per capita GDP in China was only one-fortieth of the U.S. level and one-tenth the Brazilian level. Since then, China’s real per capita GDP has grown at an average rate exceeding 8 percent per year. As a result, China’s real per capita GDP is now almost one-fifth the U.S. level and at the same level as Brazil. This rapid and sustained improvement in average living standard has occurred in a country with more than 20 percent of the world’s population so that China is now the second-largest economy in the world. I will begin by discussing briefly China’s historical growth performance from 1800 to 1950. I then present growth accounting results for the period from 1952 to 1978 and the period since 1978, decomposing the sources of growth into capital deepening, labor deepening, and productivity growth. But the main focus of this paper will be to examine the sources of growth since 1978, the year when China started economic reform. Perhaps surprisingly, given China’s well-documented sky-high rates of saving and investment, I will argue that China’s rapid growth over the last three decades has been driven by productivity growth rather than by capital investment. I also examine the contributions of sector-level productivity growth, and of resource reallocation across sectors and across firms within a sector, to aggregate productivity growth. Overall, gradual and persistent institutional change and policy reforms that have reduced distortions and improved economic incentives are the main reasons for the productivity growth.

“Aggregate Savings and External Imbalances in China,” by Dennis Tao Yang

Over the last decade, the internal and external macroeconomic imbalances in China have risen to unprecedented levels. In 2008, China’s national savings rate soared to over 53 percent of its GDP, whereas its current account surplus exceeded 9 percent of GDP. This paper presents a unified framework for understanding the structural causes of these imbalances. I argue that the imbalances are attributable to a set of policies and institutions embedded in the economy. I propose a unified framework for understanding the joint causes of the high savings rate and external imbalances in China. My explanations first focus on an array of factors that encouraged saving across the corporate, government, and household sectors, such as policies that affected sectoral income distribution, along with factors like incomplete social welfare reforms, and population control policies. I then turn to policies that limited investment in China, thus preventing the high savings from being used domestically. Finally, I will examine how trade policies, such as export tax rebates, special economic zones, and exchange rate policies, strongly promote exports. Moreover, the accession of China to the World Trade Organization has dramatically amplified the effects of these structural distortions. In conclusion, I recommend some policy reforms for rebalancing the Chinese economy.

“How Did China Take Off?” by Yasheng Huang

There are two prevailing perspectives on how China took off. One emphasizes the role of globalization—foreign trade and investments and special economic zones; the other emphasizes the role of internal reforms, especially rural reforms. Detailed documentary and quantitative evidence provides strong support for the second hypothesis. To understand how China’s economy took off requires an accurate and detailed understanding of its rural development, especially rural industry spearheaded by the rise of township and village enterprises. Many China scholars believe that township and village enterprises have a distinct ownership structure—that they are owned and operated by local governments rather than by private entrepreneurs. I will show that township and village enterprises from the inception have been private and that China undertook significant and meaningful financial liberalization at the very start of reforms. Rural private entrepreneurship and financial reforms correlate strongly with some of China’s best-known achievements—poverty reduction, fast GDP growth driven by personal consumption (rather than by corporate investments and government spending), and an initial decline of income inequality. The conventional view of China scholars is right about one point—that today’s Chinese financial sector is completely state-controlled. This is because China reversed almost all of its financial liberalization sometime around the early to mid 1990s. This financial reversal, despite its monumental effect on the welfare of hundreds of millions of rural Chinese, is almost completely unknown in the West.

Other Articles

“Amy Finkelstein: 2012 John Bates Clark Medalist,” by Jonathan Levin and James Poterba

Amy Finkelstein is the 2012 recipient of the John Bates Clark Medal from the American Economic Association. The core concerns of Amy’s research program have been insurance markets and health care. She has addressed whether asymmetric information leads to inefficiencies in insurance markets, how large social insurance programs affect healthcare markets, and the determinants of innovation incentives in health care. We describe a number of Amy’s key research contributions, with particular emphasis on those identified by the Honors and Awards Committee of the American Economic Association in her Clark Medal citation, as well as her broader contributions to the field of economics.

“Retrospectives: Irving Fisher’s Appreciation and Interest (1896) and the Fisher Relation,” by Robert W. Dimand and Rebeca Gomez Betancourt
Irving Fisher’s monograph Appreciation and Interest (1896) proposed his famous equation showing expected inflation as the difference between nominal interest and real interest rates. In addition, he drew attention to insightful remarks and numerical examples scattered through the earlier literature, and he derived results ranging from the uncovered interest arbitrage parity condition between currencies to the expectations theory of the term structure of interest rates. As J. Bradford DeLong wrote in this journal (Winter 2000), “The story of 20th century macroeconomics begins with Irving Fisher” and specifically with Appreciation and Interest because “the transformation of the quantity theory of money into a tool for making quantitative analyses and predictions of the price level, inflation, and interest rates was the creation of Irving Fisher.” I discuss the message of Appreciation and Interest, and assess how original he was.
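
For reference, the relation described here can be written out in standard textbook notation (the notation is mine, not reproduced from the article): the exact form compounds the real rate and expected inflation, and the familiar version is its first-order approximation.

```latex
% Exact Fisher relation: the gross nominal return equals the gross
% real return times gross expected inflation.
(1 + i) = (1 + r)(1 + \pi^{e})
% For small rates the cross term r * pi^e is negligible, giving the
% familiar approximation:
i \approx r + \pi^{e}
```
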
“Recommendations for Further Reading,” by Timothy Taylor

Should Voting be Compulsory?

Just to put my cards face up on the table right here at the start, I’m not in favor of compulsory voting. But I think the case for doing so is stronger than commonly recognized. Let me lay out the arguments as I see them: low turnout, what the penalties look like in some other countries for not voting, the free speech/constitutional issues, and whether any resulting differences in outcomes would be desirable.

The case for making it compulsory to vote begins with the (arguable) notion that democracy would be better served if participation in elections were higher. Here’s a figure from a post of mine a couple of months ago on “Voter Turnout Since 1964.” With some variation across age groups, voter turnout in presidential elections has been sagging over the last few decades.

Some nations have responded to concerns over low voter turnout by passing laws that make it a requirement to vote. Here’s a list of countries with such laws, and the penalties that they impose for not voting, taken from a June 2006 report from Britain’s Electoral Commission. The penalties are categorized from “Very Strict” to “None.” But honestly, even the “Very Strict” penalties are not especially onerous.

In talking with people on this subject, I’ve found that one immediate response is that compulsory voting must violate freedom or free speech in some way. I have some of this reaction myself. But while one may reasonably oppose the idea of compulsory voting, the case that it violates a specific law or constitutional right is difficult to make. Indeed, the original 1777 constitution of the state of Georgia specifically called for a potential penalty of five pounds for not voting–although it also allowed an exception for those with a good explanation. If the U.S. government can require you to pay money for taxes, or compel you to serve on jury duty, or institute a military draft, it probably has the power to require that you show up and vote. Of course, a compulsory voting law would almost certainly include provisions for conscientious objectors to voting, and you would be permitted to turn in a totally blank ballot if you wished. The penalties for not voting would be an inconvenience, but far from draconian.

For a review of the various legal and constitutional ins and outs of compulsory voting, along with some of the practical arguments, I recommend this anonymous 2007 note in the Harvard Law Review, called “The Case for Compulsory Voting.”

The author points out (footnotes omitted): “Approximately twenty-four nations have some kind of compulsory voting law, representing 17% of the world’s democratic nations. The effect of compulsory voting laws on voter turnout is substantial. Multivariate statistical analyses have shown that compulsory voting laws raise voter turnout by seven to sixteen percentage points.”

The anonymous author also offers what seem to me ultimately the two strongest arguments for compulsory voting. The first is that a larger turnout will (arguably) provide a more accurate representation of what the public wants, and in that sense will strengthen the bond between the electorate and its elected representatives. The second and more subtle argument is that compulsory voting would mean that political parties could focus much less on voter turnout. Less money and effort could go into turning out the vote, and more into persuasion. Those who now vote almost certainly have stronger partisan feelings, on average, than those who don’t vote, so politicians aim their advertisements and strategies at that more partisan group. Many negative campaign ads attempt to reduce turnout for a candidate; if turnout were high, the usefulness of such negative ads would be diminished. A broader spectrum of voters would push candidates to offer a broader spectrum of messages to appeal to those voters, and groups that now have low turnout would find themselves equally courted by politicians.

The question becomes whether these potential benefits to democracy as a whole are worth the imposition of compulsory voting. The anonymous writer in the Harvard Law Review offers what is surely meant to be an attention-grabbing and paradoxical-sounding conclusion: “Although there are several legal obstacles to compulsory voting, none of them appear to be substantial enough to bar compulsory voting laws. … The biggest obstacle to compulsory voting is the political reality that compulsory voting seems incompatible with many Americans’ notions of individual liberty. As with many other civic duties, however, voting is too important to be left to personal choice.”

How might one respond to these arguments? Perhaps the most obvious answer is that if one looks at the countries that have compulsory voting–say, Brazil, Australia, Peru, Thailand–it’s not obvious that their politics are characterized by greater appeals to the nonpartisan middle, or that the bond between the population and its elected representatives is especially strong.

For a more detailed deconstruction, I recommend a 2009 essay by Annabelle Lever in Public Reason magazine, “Is Compulsory Voting Justified?” Basically, her argument comes down to a belief that the potential gains from compulsory voting are unproven and unsupported by evidence in the countries that have tried it, while the loss of freedom from compulsory voting would be definite and real.

In Lever’s view, the evidence that exists doesn’t show that political parties start competing for the middle in a different way, nor that outcomes are different. For example, northern European social democratic countries like Sweden don’t have compulsory voting, and do have declining voter turnout. If people are uninterested or disillusioned and don’t want to vote for the existing candidates, it’s not clear that threatening them with a criminal offense for not voting will build connections between the population and its elected representatives. If political parties don’t need to focus on turnout, they will immediately turn to other ways of identifying swing groups and wedge issues. The penalties for not voting may not look large in some broad sense, but let’s be clear: when we enter the realm of compulsory voting, we are talking about defining a crime. Society will need to decide how large the fines or other penalties will be, and what happens to those (and there will be some!) who refuse to pay. If not voting is a crime, we will be making a lot of people into criminals–maybe guilty of only a minor crime, but still recorded in our information-technology society as breaking the law. It is by no means clear that having a right to vote should be reinterpreted as having a legal duty to vote: there are many rights that one may choose to exercise, or not, as one prefers. In a free society, the right to be left alone has some value, too. Lever concludes:

“I have argued that the case for compulsory voting is unproven. It is unproven because the claim that compulsion will have beneficial results rests on speculation about the way that nonvoters will vote if they are forced to vote, and there is considerable, and justified, controversy on this matter. Nor is it clear that compulsory voting is well-suited to combating those forms of low and unequal turnout that are, genuinely, troubling. On the contrary, it may make them worse by distracting politicians and voters from the task of combating persistent, damaging, and pervasive forms of unfreedom and inequality in our societies.

“Moreover, I have argued, the idea that compulsory voting violates no significant rights or liberties is mistaken and is at odds with democratic ideas about the proper distribution of power and responsibility in a society. It is also at odds with concern for the politically inexperienced and alienated, which itself motivates the case for compulsion. Rights to abstain, to withhold assent, to refrain from making a statement, or from participating, may not be very glamorous, but can be nonetheless important for that. They are necessary to protect people from paternalist and authoritarian government, and from efforts to enlist them in the service of ideals that they do not share. Rights of non-participation, no less than rights of anonymous participation, enable the weak, timid and unpopular to protest in ways that feel safe and that are consistent with their sense of duty, as well as self-interest. … People must, therefore, have rights to limit their participation in politics and, at the limit, to abstain, not simply because such rights can be crucial to prevent coercion by neighbours, family, employers or the state, but because they are necessary for people to decide what they are entitled to do, what they have a duty to do, and how best to act on their respective duties and rights.”

I don’t know of any recent polls on how Americans feel about compulsory voting, but a 2004 poll by ABC News found 72% opposed–a slightly higher percentage than in a poll taken 40 years earlier on the same subject. Results like these from nationally representative polls add a level of irony. If Americans as a group are strongly opposed to laws that would require compulsory voting, it seems problematic to glide around this opposition into an argument that, really, although they don’t know it yet, they would be better off with compulsory voting.

In a 2004 essay on compulsory voting (in this volume), Maria Gratschew points out that a number of countries in western Europe that used to have compulsory voting have moved away from it in recent decades: Austria, Italy, Greece, and the Netherlands. In discussing the decision by the Netherlands to drop its compulsory voting laws in 1967, Gratschew writes: “A number of theoretical as well as practical arguments were put forward by the committee: for example, the right to vote is each citizen’s individual right which he or she should be free to exercise or not; it is difficult to enforce sanctions against non-voters effectively; and party politics might be livelier if the parties had to attract the voters’ attention, so that voter turnout would therefore reflect actual participation and interest in politics.”

Compulsory voting is one of those intriguing roads that looks better when not actually traveled.

Minimum Wage to $9.50? $9.80? $10?

During the 2008 campaign, President Obama promised to raise the minimum wage to $9.50/hour by 2011. This pledge was made at a time when the economic slowdown was already underway: the recession started in December 2007. The pledge was also made at a time when an increase in the minimum wage was already underway: in May 2007, President Bush had signed into law an increase in the minimum wage, rising in several stages from $5.15 to $7.25 in July 2009.

Last summer, some Democratic Congressmen tried to push the issue a bit. In June, 17 House Democrats signed on as co-sponsors of a bill authored by Rep. Jesse Jackson Jr. of Illinois for an immediate rise in the minimum wage to $10/hour–and then to index it to inflation in the future. In July, over 100 Democrats in the House of Representatives signed on as co-sponsors of a bill authored by Rep. George Miller of California to raise the federal minimum wage to $9.80/hour over the next three years–and then to index it to inflation after that point. But while raising the minimum wage was a hot issue in the years before Bush signed the most recent increases into law, these calls for a still-higher minimum wage got little attention.

For background, here are a couple of graphs about the U.S. minimum wage. The first shows the nominal minimum wage over time, along with the real minimum wage adjusted to 2011 dollars. In real terms, the increase in the minimum wage from 2007 to 2009 didn’t quite return it to the peak levels of the late 1960s, but did return it to the levels of the early 1960s and most of the 1970s–as well as above the levels that prevailed during much of the 1980s and 1990s. The second graph shows the minimum wage as a share of the median wage for several countries, using OECD data. The U.S. has the lowest ratio of minimum wage to median wage–and given the greater inequality of the U.S. income distribution, the U.S. ratio would look lower still if compared to average wages. However, because of the rise from 2007 to 2009, the U.S. experienced the largest increase in its minimum wage among these countries from 2006 to 2011. (Thanks to Danlu Hu for producing these graphs.)
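
For readers who want to replicate the inflation adjustment behind the first graph, here is a minimal sketch in Python. The CPI values below are approximate illustrative stand-ins, not a substitute for the actual BLS series used for the graph:

    # Convert a nominal wage to 2011 dollars using the ratio of CPI levels.
    def to_2011_dollars(nominal_wage, cpi_in_year, cpi_2011):
        return nominal_wage * (cpi_2011 / cpi_in_year)

    # Illustrative example: the 1968 minimum wage of $1.60/hour, with
    # approximate annual-average CPI-U values (1982-84 = 100).
    print(round(to_2011_dollars(1.60, cpi_in_year=34.8, cpi_2011=224.9), 2))
    # prints roughly 10.34, i.e. about $10.34 in 2011 dollars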

So why didn’t calls for a higher minimum wage in summer 2012 get more political traction?

1) The unemployment rate in May 2007 was 4.4%, and had been below 5% for 18 months. The unemployment rate last summer was around 8.2%, and had been above 8% for more than 40 months. Thus, there was a lot less reason in May 2007 to worry about the risk that a higher minimum wage might reduce the number of jobs for unskilled labor than there was in summer 2012.

2) In summer 2012, average wage increases had been meager for most workers for several years, which made a mandated raise for minimum wage workers alone seem less appealing as a matter of fairness.

3) The increase in the minimum wage that President Bush signed into law, which took effect in stages from 2007 to 2009, made it feel less urgent to raise the minimum wage still further.

4) Some states have set their own minimum wages at a level above the U.S. minimum wage. The U.S. Department of Labor has a list of state minimum wage laws here: for example, California has a minimum wage of $8/hour and Illinois has a minimum wage of $8.25/hour. Thus, at least some of the jurisdictions that favor a higher minimum wage already have one.

5) In summer 2012, the Democratic establishment was focused on re-electing President Obama, and since raising the minimum wage was not part of his active agenda, it gave no publicity or support to the calls for a higher minimum wage.

6) In the academic world, there was a knock-down, drag-out scrum over the minimum wage through much of the 1990s. David Card and Alan Krueger published a much-cited 1994 paper in the American Economic Review comparing minimum wage workers in New Jersey and Pennsylvania, and found that the different minimum wages across the two states had no effect on employment levels (“Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania,” American Economic Review, September 1994, 84(4), pp. 772–93). This conclusion was heavily disputed, and for those who want to get their hands dirty, the December 2000 issue of the American Economic Review had 30+ pages of critique of the Card-Krueger paper and 30+ pages of response. I won’t seek to mediate that dispute here. But I think the academics who were driving the arguments had sort of exhausted themselves by the time the 2007 legislation passed, and no one seemed to be slavering for a rematch.

I was lukewarm about the rise in the minimum wage that was enacted in 2007. It seems to me that there are better ways to help low-wage workers. That said, if the minimum wage isn’t very far above the market wage for unskilled labor (and in some places it may even be below the market wage for unskilled labor), there’s no reason to believe that it will have large effects on employment. However, raising the minimum wage further, to the range of $9.50/hour or $10/hour, would in many parts of the country push it well above the prevailing wage for unskilled labor, especially in a still-weak economy, and so the effects on employment would be more deleterious.

I tried to explain some of the other policy issues raised by a higher minimum wage in my book The Instant Economist: Everything You Need to Know About How the Economy Works, published earlier this year by Penguin Plume.

“Here’s an insight for opponents of a higher minimum wage to mull over: Let’s say a 20 percent rise in the minimum wage leads to 4 percent fewer jobs for low-skilled workers (as some of the evidence suggests). But this also implies that a higher minimum wage leads to a pay raise for 96 percent of low-skilled workers. Many people in low-skill jobs don’t have full-time, year-round jobs. So perhaps these workers work 4 percent fewer hours in a year, but they get 20 percent higher pay for the hours they do work. In this scenario, even if the minimum wage reduces the number of jobs or the number of hours available, raising it could still make the vast majority of low-skilled workers better off, as they’d work fewer hours at a higher wage.
“There’s another side to the argument, however. The short-term costs to an individual of not being able to find a job are quite large, while the benefits of slightly higher wages are (relatively speaking) somewhat smaller, so the costs to the few who can’t find jobs because of a higher minimum wage may be in some sense more severe than the smaller benefits to individuals who are paid more. Those costs of higher unemployment are also unlikely to be spread evenly across the economy; instead, they are likely to be concentrated in communities that are already economically disadvantaged. Also, low-skill jobs are often entry-level jobs. If low-skill jobs become less available, the bottom rung on the employment ladder becomes less available to low-skilled workers. Thus, higher minimum wages might offer modest gains to the substantial number of low-skilled workers who get jobs, but impose substantial economic injury on those who can’t.
“There are alternatives to price floors, and economists often tend to favor such alternatives because they work with the forces of supply and demand. For example, if a government wants to boost wages for low-skilled workers, it could invest in skills-training programs. This would enable some of those workers to move into more skills-driven (and better paying) positions and would lower the supply of low-skilled labor, driving up their wages as well. The government could subsidize firms that hire low-skilled workers, enabling the firms to pay them a higher wage. Or it could subsidize the wages of low-skilled workers directly through programs such as the Earned Income Tax Credit, which provides a tax break to workers whose income is below a certain threshold. This policy increases the workers’ net income without placing any financial burden on the employers.”
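
To make the arithmetic in the first quoted paragraph concrete, here is a minimal sketch in Python, using the illustrative 20 percent and 4 percent figures from the passage (hypothetical numbers for the example, not empirical estimates):

    # A worker keeps 96% of prior hours but earns 20% more per hour.
    wage_increase = 0.20      # 20 percent rise in the minimum wage
    hours_reduction = 0.04    # 4 percent fewer hours (or jobs)

    earnings_change = (1 + wage_increase) * (1 - hours_reduction) - 1
    print(f"Change in annual earnings: {earnings_change:.1%}")
    # prints "Change in annual earnings: 15.2%" -- hours fall, but pay rises more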

What I didn’t point out in the book is the political dynamic: raising the minimum wage allows politicians to pretend that they are helping people at zero cost, because the costs don’t appear as taxes and spending. But pushing up the minimum wage substantially now, after the recent increases and in a still-struggling economy, does not strike me as wise policy.

Addendum: Thanks to reader L.S., who let me know that my argument here–that a minimum wage law can play a useful redistributive function under certain labor market assumptions, but that in general it is better for the government to move to a lower minimum wage and higher government support for low-wage workers–is quite similar to the more formal case made by David Lee and Emmanuel Saez in their recent Journal of Public Economics article, “Optimal minimum wage policy in competitive labor markets.”

Economics and Natural Disasters

In the aftermath of Hurricane Sandy, many teachers and students of economics will find themselves searching for materials on the economics of natural disasters. Here are a few examples from the last few years.

David Stromberg laid out the economic arguments about natural disasters in “Natural Disasters, Economic Development, and Humanitarian Aid,” appearing in the Summer 2007 issue of my own Journal of Economic Perspectives. (This article, like all JEP articles back to the start of the journal in 1987, is freely available to all courtesy of the American Economic Association.) Stromberg makes the fundamental point that the economic analysis of natural disasters is built on three factors: the incidence of the natural disasters themselves, the number of people exposed to the disaster, and the vulnerability of the population to that disaster. In fact, Stromberg traces this distinction back to letters between Voltaire and Rousseau in the aftermath of the great Lisbon earthquake of 1755. Voltaire had written a poem on how terrible the earthquake was; Rousseau had responded by pointing out that it was not the quake itself, but the interaction between human society and the quake, that was at issue. Here’s Stromberg (footnotes and citations omitted):

“[I]n 1755 an earthquake devastated Lisbon, which was then Europe’s fourth-largest city. At the first quake, fissures five meters wide appeared in the city center. The waves of the subsequent tsunami engulfed the harbor and downtown. Fires raged for days in areas unaffected by the tsunami. An estimated 60,000 people were killed, out of a Lisbon population of 275,000. In a letter to Voltaire dated August 18, 1756, Jean-Jacques Rousseau notes that while the earthquake was an act of nature, previous acts of men, like housing construction and urban residence patterns, set the stage for the high death toll. Rousseau wrote: “Without departing from your subject of Lisbon, admit, for example, that nature did not construct twenty thousand houses of six to seven stories there, and that if the inhabitants of this great city had been more equally spread out and more lightly lodged, the damage would have been much less and perhaps of no account.”

“Following Rousseau’s line of thought, disaster risk analysts distinguish three factors contributing to a disaster: the triggering natural hazard event (such as the earthquake striking in the Atlantic Ocean outside Portugal); the population exposed to the event (such as the 275,000 citizens of Lisbon); and the vulnerability of that population (higher for the people in seven-story buildings).”

Of course, this insight implies that which events are classified as a “natural disaster” depends not just on the size of the natural event, but on how many people are affected. Thus, the WHO Collaborating Centre for Research on the Epidemiology of Disasters (CRED) maintains an Emergency Events Database that collects data on natural disasters, where a disaster is defined as an event with 10 or more people reported killed, 100 or more people reported affected, a declaration of a state of emergency, or a call for international assistance. Here are some of their figures showing global trends in natural disasters from 1975 through 2011. The first graph shows the number of such disasters over time: the total was rising into the early 2000s, but has leveled off since then.
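
The CRED definition is just a classification rule, and it is easy to state precisely. Here is a minimal sketch in Python; the function and parameter names are mine for illustration, not CRED’s:

    # An event counts as a disaster in the CRED database if ANY criterion holds.
    def is_cred_disaster(killed, affected, emergency_declared, intl_aid_requested):
        return (killed >= 10
                or affected >= 100
                or emergency_declared
                or intl_aid_requested)

    # Example: 3 reported deaths, 250 people affected, no declarations.
    print(is_cred_disaster(3, 250, False, False))  # True, via the "affected" test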

The second and third graphs show the number of people killed and the number of people affected by such disasters. The trendline for the number of people killed has been dropping over time, with occasional spikes: for example, the earthquake in Haiti in 2010, or the cyclone that hit Myanmar and the major earthquake in China in 2008. However, the number of people affected by natural disasters is rising over time, which one would expect as a result of growing population levels, if nothing else.

Finally, the fourth graph shows monetary losses from natural disasters. Of course, this graph is driven by whether the disasters hit high-income or middle-income countries, where the measured economic costs of damage are higher than in low-income countries.

The best way of dealing with natural disasters is often before they occur: early warning systems, advance planning, encouraging natural protections like minimizing deforestation or protecting wetlands, building codes, flood control, and more. For a nice overview of such efforts around the world, I recommend the 2010 World Bank report “Natural Hazards, UnNatural Disasters: The Economics of Effective Prevention.” The report begins: “The adjective ‘UnNatural’ in the title of this report conveys its key message: earthquakes, droughts, floods, and storms are natural hazards, but the unnatural disasters are deaths and damages that result from human acts of omission and commission. Every disaster is unique, but each exposes actions—by individuals and governments at different levels—that, had they been different, would have resulted in fewer deaths and less damage. Prevention is possible, and this report examines what it takes to do this cost-effectively.”

The World Bank report is focused more on low-income countries, but similar lessons about prevention apply to high-income countries as well. In the New York Times on Tuesday, David W. Chen and Mireya Navarro discuss “For Years, Warnings That It Could Happen Here.” They describe proposals that have been floating around the New York metro area for years: levee systems, storm surge barriers, floodgates in subways, moving people and economic activity away from low-lying areas, and in general having plans in place. Not many storms will pack the wallop of Hurricane Sandy, but New York City is a huge agglomeration of people living on a coastline who will inevitably be susceptible to storm and flood damage.

Two other quick references: First, the National Flood Insurance Program will almost certainly not have the money to pay for the damage from Hurricane Sandy. For background on that program and how it works, and why its inability to fund these damages was completely predictable, Erwann O. Michel-Kerjan lays it out in “Catastrophe Economics: The National Flood Insurance Program,” in the Fall 2010 issue of my own Journal of Economic Perspectives.

Second, the aftermath of natural disasters often motivates teachers of economics to discuss the extent to which, even if price restrictions don’t make economic sense most of the time, they might be justifiable in the aftermath of a natural disaster to prevent “price-gouging.” Michael Giberson of Texas Tech University has a nice readable essay on “The Problem with Price Gouging Laws” in the Spring 2011 issue of Regulation magazine. I blogged about the article here. He points out that 31 states have such laws, and that the completely predictable problems with such laws are that they discourage bringing supplies into disaster areas, discourage conserving key resources, concentrate economic losses on local merchants, and worsen the economic losses in the disaster area.