Should Voting be Compulsory?

Just to put my cards face up on the table right here at the start, I’m not in favor of compulsory voting. But I think the case for it is stronger than commonly recognized. Let me lay out the arguments as I see them: low turnout, what the penalties for not voting look like in some other countries, the free speech/constitutional issues, and whether any resulting differences in outcomes would be desirable.

The case for making it compulsory to vote begins with the (arguable) notion that democracy would be better served if participation in elections were higher. Here’s a figure from a post of mine a couple of months ago on “Voter Turnout Since 1964.” With some variation across age groups, voter turnout in presidential elections has been sagging over the last few decades.

Some nations have responded to concerns over low voter turnout by passing laws that make it a requirement to vote. Here’s a list of countries with such laws, and the penalties that they impose for not voting, taken from a June 2006 report from Britain’s Electoral Commission. The penalties are categorized from “Very Strict” to “None.” But honestly, even the “Very Strict” penalties are not especially onerous.


In talking with people on this subject, I’ve found that one immediate response is that compulsory voting must be a violation of freedom or free speech in some way. I have some of this reaction myself. But while one may reasonably oppose the idea of compulsory voting, the case that it violates a specific law or constitutional right is difficult to make. Indeed, the original 1777 constitution of the state of Georgia specifically called for a potential penalty of five pounds for not voting–although it also allowed an exception for those with a good explanation. If the U.S. government can require you to pay money for taxes, or compel you to serve on jury duty, or institute a military draft, it probably has the power to require that you show up and vote. Of course, a compulsory voting law would almost certainly include provisions for conscientious objectors to voting, and you would be permitted to turn in a totally blank ballot if you wished. The penalties for not voting would be an inconvenience, but far from draconian.

For a review of the various legal and constitutional ins and outs of compulsory voting, along with some of the practical arguments, I recommend this anonymous 2007 note in the Harvard Law Review, called “The Case for Compulsory Voting.”

The author points out (footnotes omitted): “Approximately twenty-four nations have some kind of compulsory voting law, representing 17% of the world’s democratic nations. The effect of compulsory voting laws on voter turnout is substantial. Multivariate statistical analyses have shown that compulsory voting laws raise voter turnout by seven to sixteen percentage points.”

The anonymous author also offers what seem to me ultimately the two strongest arguments for compulsory voting. The first argument is that a larger turnout will (arguably) provide a more accurate representation of what the public wants, and in that sense will strengthen the bond between the electorate and its elected representatives. The second and more subtle argument is that compulsory voting would mean that political parties could focus much less on voter turnout. Less money and effort could go into turning out the vote, and more into persuasion. Those who now vote almost certainly have stronger partisan feelings, on average, than those who don’t vote. So politicians aim their advertisements and strategies at that more partisan group. Many negative campaign ads attempt to reduce turnout for a candidate: if turnout were high, the usefulness of such negative ads could be diminished. A broader spectrum of voters would push candidates to offer a broader spectrum of messages to appeal to those voters, and groups that now have low turnout would find themselves equally courted by politicians.

The question becomes whether these potential benefits to the democracy as a whole are worth the imposition of compulsory voting. The anonymous writer in the Harvard Law Review offers what is surely meant to be an attention-grabbing and paradoxical-sounding conclusion: “Although there are several legal obstacles to compulsory voting, none of them appear to be substantial enough to bar compulsory voting laws. … The biggest obstacle to compulsory voting is the political reality that compulsory voting seems incompatible with many Americans’ notions of individual liberty. As with many other civic duties, however, voting is too important to be left to personal choice.”

How might one respond to these arguments? Perhaps the most obvious answer is that if one looks at the countries that have compulsory voting–say, Brazil, Australia, Peru, Thailand–it’s not obvious that their politics are characterized by greater appeals to the nonpartisan middle, or that the bond between the population and its elected representatives is especially strong.

For a more detailed deconstruction, I recommend a 2009 essay by Annabelle Lever in Public Reason magazine, “Is Compulsory Voting Justified?” Basically, her argument comes down to a belief that the potential gains from compulsory voting are unproven and unsupported by evidence in countries that have tried it, while the lost freedom from compulsory voting would be definite and real.

In Lever’s view, the evidence that exists doesn’t show that political parties start competing for the middle in a different way, nor that outcomes are different. For example, northern European social democratic countries like Sweden don’t have compulsory voting, and do have declining voter turnout.
If people are uninterested or disillusioned and don’t want to vote for the existing candidates, it’s not clear that threatening them with a criminal offense for not voting will build connections from the population to elected representatives. If political parties don’t need to focus on turnout, they will immediately turn to other ways of identifying swing groups and wedge issues. The penalties for not voting may not look large in some broad sense, but be clear: when we enter the realm of compulsory voting, we are talking about criminal behavior. Society will need to decide how large the fines or other penalties will be, and what happens to those (and there will be some!) who refuse to pay. If not voting is a crime, we will be making a lot of people into criminals–maybe guilty of only a minor crime, but still recorded in our information-technology society as breaking the law. It is by no means clear that having a right to vote should be reinterpreted as having a legal duty to vote: there are many rights that one may choose to exercise, or not, as one prefers. In a free society, the right to be left alone has some value, too. Lever concludes:

“I have argued that the case for compulsory voting is unproven. It is unproven because the claim that compulsion will have beneficial results rests on speculation about the way that nonvoters will vote if they are forced to vote, and there is considerable, and justified, controversy on this matter. Nor is it clear that compulsory voting is well-suited to combating those forms of low and unequal turnout that are, genuinely, troubling. On the contrary, it may make them worse by distracting politicians and voters from the task of combating persistent, damaging, and pervasive forms of unfreedom and inequality in our societies.

“Moreover, I have argued, the idea that compulsory voting violates no significant rights or liberties is mistaken and is at odds with democratic ideas about the proper distribution of power and responsibility in a society. It is also at odds with concern for the politically inexperienced and alienated, which itself motivates the case for compulsion. Rights to abstain, to withhold assent, to refrain from making a statement, or from participating, may not be very glamorous, but can be nonetheless important for that. They are necessary to protect people from paternalist and authoritarian government, and from efforts to enlist them in the service of ideals that they do not share. Rights of non-participation, no less than rights of anonymous participation, enable the weak, timid and unpopular to protest in ways that feel safe and that are consistent with their sense of duty, as well as self-interest. … People must, therefore, have rights to limit their participation in politics and, at the limit, to abstain, not simply because such rights can be crucial to prevent coercion by neighbours, family, employers or the state, but because they are necessary for people to decide what they are entitled to do, what they have a duty to do, and how best to act on their respective duties and rights.”

I don’t know of any recent polls on how Americans feel about compulsory voting, but a 2004 poll by ABC News found 72% opposed–a slightly higher percentage than in a poll taken 40 years earlier on the same subject. These kinds of results from nationally representative polls pose an additional irony. If Americans as a group are strongly opposed to laws that would require compulsory voting, it seems problematic to glide around this opposition into an argument that, really, although they don’t know it yet, they would be better off with compulsory voting.

In a 2004 essay on compulsory voting (in this volume), Maria Gratschew points out that a number of countries in western Europe that used to have compulsory voting have moved away from it in recent decades: Austria, Italy, Greece, and the Netherlands. In discussing the decision by the Netherlands to drop its compulsory voting laws in 1967, Gratschew writes: “A number of theoretical as well as practical arguments were put forward by the committee: for example, the right to vote is each citizen’s individual right which he or she should be free to exercise or not; it is difficult to enforce sanctions against non-voters effectively; and party politics might be livelier if the parties had to attract the voters’ attention, so that voter turnout would therefore reflect actual participation and interest in politics.”

Compulsory voting is one of those intriguing roads that looks better when not actually traveled.

Minimum Wage to $9.50? $9.80? $10?

During the 2008 campaign, President Obama promised to raise the minimum wage to $9.50/hour by 2011. This pledge was made at a time when the economic slowdown was already underway: the recession started in December 2007. The pledge was also made at a time when an increase in the minimum wage was already underway: in May 2007, President Bush had signed into law an increase in the minimum wage, to rise in several stages from $5.15 to $7.25 in July 2009.

Last summer, some Democratic Congressmen tried to push the issue a bit. In June, 17 House Democrats signed on as co-sponsors of a bill authored by Rep. Jesse Jackson Jr. of Illinois for an immediate rise in the minimum wage to $10/hour–and then to index it to inflation in the future. In July, over 100 Democrats in the House of Representatives signed on as co-sponsors of a bill authored by Rep. George Miller of California to raise the federal minimum wage to $9.80/hour over the next three years–and then to index it to inflation after that point. But while raising the minimum wage was a hot issue in the years before Bush signed the most recent increases into law, these calls for a still-higher minimum wage got little attention.

For background, here are a couple of graphs about the U.S. minimum wage. The first graph shows the nominal minimum wage over time, and also the real minimum wage adjusted to 2011 dollars. In real terms, the increase in the minimum wage from 2007 to 2009 didn’t quite get it back to the peak levels of the late 1960s, but did return it to the levels of the early 1960s and most of the 1970s–as well as above the levels that prevailed during much of the 1980s and 1990s. The second graph shows the minimum wage as a share of the median wage for several countries, using OECD data. The U.S. has the lowest ratio of minimum wage to median wage–and given the greater inequality of the U.S. income distribution, the U.S. ratio would look lower still if compared to average wages. However, because of the rise from 2007 to 2009, the U.S. experienced the largest increase in its minimum wage among these countries from 2006 to 2011. (Thanks to Danlu Hu for producing these graphs.)
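The nominal-to-real conversion behind the first graph is straightforward CPI deflation. Here is a minimal Python sketch of the calculation; the CPI-U levels below are approximate illustrative values, not official BLS figures:

```python
# Converting a nominal minimum wage into "real" 2011 dollars via the CPI.
# The CPI levels here are rough illustrative values, not official figures.
cpi = {1968: 34.8, 2009: 214.5, 2011: 224.9}

def real_wage(nominal, year, base_year=2011):
    """Deflate a nominal wage in `year` into base_year dollars."""
    return nominal * cpi[base_year] / cpi[year]

# The 1968 minimum wage of $1.60, expressed in 2011 dollars:
print(round(real_wage(1.60, 1968), 2))
# The $7.25 minimum reached in July 2009, in 2011 dollars:
print(round(real_wage(7.25, 2009), 2))
```

With these illustrative inputs, the 1968 minimum comes out a bit above $10 in 2011 dollars, which is why the 2007–2009 increase to $7.25 still falls short of the late-1960s peak.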

So why didn’t calls for a higher minimum wage in summer 2012 get more political traction?

1) The unemployment rate in May 2007 was 4.4%, and had been below 5% for 18 months. The unemployment rate last summer was around 8.2%, and had been above 8% for more than 40 months. Thus, there was a lot less reason in May 2007 to worry about the risk that a higher minimum wage might reduce the number of jobs for unskilled labor than there was in summer 2012.

2) In summer 2012, average wage increases had not been looking good for most workers for several years, which made raising the minimum wage seem less appealing as a matter of fairness.

3) The increase in the minimum wage that President Bush signed into law that took effect from 2007 to 2009 made it feel less urgent to raise the minimum wage still further.

4) Some states have set their own minimum wages, at a level above the U.S. minimum wage. The U.S. Department of Labor has a list of state minimum wage laws here: for example, California has a minimum wage of $8/hour and Illinois has a minimum wage of $8.25/hour. Thus, at least some of the jurisdictions that favor a higher minimum wage are getting to have it.

5) In summer 2012, the Democratic establishment was focused on re-electing President Obama, and since raising the minimum wage was not part of his active agenda, it gave no publicity or support to the calls for a higher minimum wage.

6) In the academic world, there was a knock-down, drag-out scrum about the minimum wage going through much of the 1990s. David Card and Alan Krueger published a much-cited paper in 1994 in the American Economic Review, comparing minimum wage workers in New Jersey and Pennsylvania, and found that the different minimum wages across states had no effect on employment levels. (“Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania.” American Economic Review, September 1994, 84(4), pp. 772–93.) This conclusion was heavily disputed, and for those who want to get their hands dirty, the December 2000 issue of the American Economic Review had 30+ pages of critique of the Card-Krueger paper and 30+ pages of response. I won’t seek to mediate that dispute here. But I think that the academics who were driving the arguments had sort of exhausted themselves by the time the 2007 legislation passed, and no one seemed to be slavering for a rematch.

I was lukewarm on the rise in the minimum wage that was enacted in 2007. It seems to me that there are better ways to help low-wage workers. But that said, if the minimum wage isn’t very far above the market wage for unskilled labor (and in some places may even be below the market wage for unskilled labor), there’s no reason to believe that it will have large effects on employment. However, raising the minimum wage further to the range of $9.50/hour or $10/hour would in many parts of the country push it well above the prevailing wage for unskilled labor, especially in a still-weak economy, and so the effects on employment would be more deleterious.

I tried to explain some of the other policy issues raised by a higher minimum wage in my book The Instant Economist: Everything You Need to Know About How the Economy Works, published earlier this year by Penguin Plume.

“Here’s an insight for opponents of a higher minimum wage to mull over: Let’s say a 20 percent rise in the minimum wage leads to 4 percent fewer jobs for low-skilled workers (as some of the evidence suggests). But this also implies that a higher minimum wage leads to a pay raise for 96 percent of low-skilled workers. Many people in low-skill jobs don’t have full-time, year-round jobs. So perhaps these workers work 4 percent fewer hours in a year, but they get 20 percent higher pay for the hours they do work. In this scenario, even if the minimum wage reduces the number of jobs or the number of hours available, raising it could still make the vast majority of low-skilled workers better off, as they’d work fewer hours at a higher wage.
“There’s another side to the argument, however. The short-term costs to an individual of not being able to find a job are quite large, while the benefits of slightly higher wages are (relatively speaking) somewhat smaller, so the costs to the few who can’t find jobs because of a higher minimum wage may be in some sense more severe than the smaller benefits to individuals who are paid more. Those costs of higher unemployment are also unlikely to be spread evenly across the economy; instead, they are likely to be concentrated in communities that are already economically disadvantaged. Also, low-skill jobs are often entry-level jobs. If low-skill jobs become less available, the bottom rung on the employment ladder becomes less available to low-skilled workers. Thus, higher minimum wages might offer modest gains to the substantial number of low-skilled workers who get jobs, but impose substantial economic injury on those who can’t.
“There are alternatives to price floors, and economists often tend to favor such alternatives because they work with the forces of supply and demand. For example, if a government wants to boost wages for low-skilled workers, it could invest in skills-training programs. This would enable some of those workers to move into more skills-driven (and better paying) positions and would lower the supply of low-skilled labor, driving up their wages as well. The government could subsidize firms that hire low-skilled workers, enabling the firms to pay them a higher wage. Or it could subsidize the wages of low-skilled workers directly through programs such as the Earned Income Tax Credit, which provides a tax break to workers whose income is below a certain threshold. This policy increases the workers’ net income without placing any financial burden on the employers.”
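The arithmetic in the first quoted paragraph is easy to check. A minimal Python sketch, using the passage’s illustrative numbers (a 20 percent wage increase paired with 4 percent fewer hours; these are the quote’s stylized figures, not empirical estimates):

```python
# Back-of-the-envelope check: a 20% minimum wage increase paired with
# 4% fewer hours worked by low-skilled workers as a group.
wage_rise = 0.20   # illustrative 20% increase in the wage
hours_cut = 0.04   # illustrative 4% reduction in hours (or jobs)

# Relative change in total earnings for the group:
earnings_ratio = (1 - hours_cut) * (1 + wage_rise)
pct_change = (earnings_ratio - 1) * 100
print(f"Change in total earnings: {pct_change:+.1f}%")  # +15.2%
```

Even with the hours cut, group earnings rise by about 15 percent, which is the quote’s point: most low-skilled workers could come out ahead, even though the losses fall heavily on the few who lose jobs entirely.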

What I didn’t point out in the book is the political dynamic: raising the minimum wage allows politicians to pretend that they are helping people at zero cost, because the costs don’t appear as taxes and spending. But pushing up the minimum wage substantially now, after the recent increases and in a still-struggling economy, does not strike me as wise policy.

Addendum: Thanks to reader L.S. who let me know that my argument here–a minimum wage law can play a useful redistribution function under certain labor market assumptions, but in general it is better for the government to move to a lower minimum wage and higher government support for low-wage workers–is quite similar to the more formal case made by David Lee and Emmanuel Saez in their recent Journal of Public Economics article, “Optimal minimum wage policy in competitive labor markets.”

Economics and Natural Disasters

In the aftermath of Hurricane Sandy, many teachers and students of economics will find themselves searching for materials that provide some background on the economics of natural disasters. Here are a few examples from the last few years.

David Stromberg laid out the economic arguments about natural disasters in “Natural Disasters, Economic Development, and Humanitarian Aid,” appearing in the Summer 2007 issue of my own Journal of Economic Perspectives. (This article, like all JEP articles back to the start of the journal in 1987, is freely available to all courtesy of the American Economic Association.) Stromberg makes the fundamental point that the economic analysis of natural disasters is built on three factors: the incidence of the natural disasters themselves, the number of people exposed to the disaster, and the vulnerability of the population to that disaster. In fact, Stromberg traces this distinction back to letters between Voltaire and Rousseau in the aftermath of the great Lisbon earthquake of 1755. Voltaire had written a poem on how terrible the earthquake was; Rousseau had responded by pointing out that it was not the quake, but the interaction between human society and the quake, which was at issue. Here’s Stromberg (footnotes and citations omitted):

“[I]n 1755 an earthquake devastated Lisbon, which was then Europe’s fourth-largest city. At the first quake, fissures five meters wide appeared in the city center. The waves of the subsequent tsunami engulfed the harbor and downtown. Fires raged for days in areas unaffected by the tsunami. An estimated 60,000 people were killed, out of a Lisbon population of 275,000. In a letter to Voltaire dated August 18, 1756, Jean-Jacques Rousseau notes that while the earthquake was an act of nature, previous acts of men, like housing construction and urban residence patterns, set the stage for the high death toll. Rousseau wrote: “Without departing from your subject of Lisbon, admit, for example, that nature did not construct twenty thousand houses of six to seven stories there, and that if the inhabitants of this great city had been more equally spread out and more lightly lodged, the damage would have been much less and perhaps of no account.”

“Following Rousseau’s line of thought, disaster risk analysts distinguish three factors contributing to a disaster: the triggering natural hazard event (such as the earthquake striking in the Atlantic Ocean outside Portugal); the population exposed to the event (such as the 275,000 citizens of Lisbon); and the vulnerability of that population (higher for the people in seven-story buildings).”

Of course, this insight implies that what events are classified as a “natural disaster” is not just about the size of the natural event, but about how many people are affected. Thus, the WHO Collaborating Centre for Research on the Epidemiology of Disasters (CRED) maintains an Emergency Events Database that collects data on natural disasters, where a disaster is defined as 10 or more people reported killed, 100 or more people reported affected, a declaration of a state of emergency, or a call for international assistance. Here are some of their figures showing global trends in natural disasters from 1975 through 2011. The first graph shows the number of such disasters over time: the total was rising into the early 2000s, but has leveled off since then.

The second and third graphs show the number of people killed, and the number of people affected, by such disasters. The trendline for the number of people killed has been dropping over time, with occasional spikes: in 2010, the earthquake in Haiti, or in 2008, the cyclone that hit Myanmar and the major earthquake in China. However, the number of people affected by natural disasters is rising over time, which one would expect as a result of growing population levels, if nothing else.

Finally, the fourth graph shows monetary losses from natural disasters. Of course, this graph is driven by whether the disasters hit high-income or middle-income countries, where the measured economic costs of damage are higher than in low-income countries.

The best way of dealing with natural disasters is often before they occur: early warning systems, advance planning, encouraging natural protections like minimizing deforestation or protecting wetlands, building codes, flood control, and more. For a nice overview of such efforts around the world, I recommend the 2010 report on “Natural Hazards, UnNatural Disasters: The Economics of Effective Prevention,” from the World Bank. The report begins: “The adjective “UnNatural” in the title of this report conveys its key message: earthquakes, droughts, floods, and storms are natural hazards, but the unnatural disasters are deaths and damages that result from human acts of omission and commission. Every disaster is unique, but each exposes actions—by individuals and governments at different levels—that, had they been different, would have resulted in fewer deaths and less damage. Prevention is possible, and this report examines what it takes to do this cost-effectively.”

The World Bank report is focused more on low-income countries, but similar lessons about prevention apply to high-income countries as well. In the New York Times on Tuesday, David W. Chen and Mireya Navarro discuss “For Years, Warnings That It Could Happen Here.” They talk about proposals that have been floating around the New York metro area for years now about levee systems, storm surge barriers, floodgates in subways, moving people and economic activity away from low-lying areas, and in general having plans in place. Not many storms will pack the wallop of Hurricane Sandy, but New York City is a huge agglomeration of people living on a coastline who will inevitably be susceptible to storm and flood damage.

Two other quick references: First, the National Flood Insurance Program will almost certainly not have the money to pay for the damage from Hurricane Sandy. For background on that program and how it works, and why its inability to fund these damages was completely predictable, Erwann O. Michel-Kerjan lays it out in “Catastrophe Economics: The National Flood Insurance Program,” in the Fall 2010 issue of my own Journal of Economic Perspectives.

Second, the aftermath of natural disasters is often a motivation for teachers of economics to discuss the extent to which, even if price restrictions don’t make economic sense most of the time, they might be justifiable in the aftermath of a natural disaster to prevent “price-gouging.” Michael Giberson of Texas Tech University has a nice readable essay on “The Problem with Price Gouging Laws” in the Spring 2011 issue of Regulation magazine. I blogged about the article here. He points out that 31 states have such laws, and that the completely predictable problems with such laws are that they discourage bringing supplies into disaster areas, discourage conserving on key resources, concentrate economic losses on local merchants, and worsen the economic losses in the disaster area.

72 is the New 30?!!

Everyone knows that human life expectancies have been improving. But just how extraordinary and incomparable that improvement has been is not widely understood. Demographers Oskar Burger, Annette Baudisch, and James W. Vaupel offer two remarkable sets of comparisons in “Human mortality improvement in evolutionary context,” which appears in a recent issue of the Proceedings of the National Academy of Sciences (October 30, 2012, vol. 109, no. 44, 18210–18214). From their abstract:

“The health and economic implications of mortality reduction have been given substantial attention, but the observed malleability of human mortality has not been placed in a broad evolutionary context. We quantify the rate and amount of mortality reduction by comparing a variety of human populations to the evolved human mortality profile, here estimated as the average mortality pattern for ethnographically observed hunter-gatherers. We show that human mortality has decreased so substantially that the difference between hunter-gatherers and today’s lowest mortality populations is greater than the difference between hunter-gatherers and wild chimpanzees. The bulk of this mortality reduction has occurred since 1900 and has been experienced by only about 4 of the roughly 8,000 human generations that have ever lived. Moreover, mortality improvement in humans is on par with or greater than the reductions in mortality in other species achieved by laboratory selection experiments and endocrine pathway mutations.”

Their first main set of comparisons is to look at human mortality declines in a very long-run evolutionary context: from hunter-gatherers to modern humans. They focus to some extent on modern Sweden and Japan as examples of the highest life expectancies for modern humans (in what follows, “y” is the authors’ abbreviation for “years,” and footnotes and references to figures are omitted).

“That is, Swedes in 1900 had mortality profiles closer to hunter-gatherers than to the Swedes of today. This relative difference between Swedes recently and those 100 y ago has emerged in a rapid revolutionary leap, as this distance is far greater than that between hunter-gatherers and chimps. The recent jumps in mortality reduction are remarkable in the context of mammal diversity because age-specific death rates for hunter-gatherers are already exceptionally low, probably among the lowest of any nonhuman primate or terrestrial mammal (especially if body size is controlled for), and lower than even captive chimpanzees at all ages. The human mortality profile, however, is so plastic that over the past century the populations doing best managed to achieve very large reductions in death rates that were already low compared with those of other species. …

“For example, hunter-gatherers at age 30 have the same probability of death as present-day Japanese at the age of 72: hence the age of a person in Japan that is equivalent to a 30-y-old hunter-gatherer is 72. In other words, compared with the evolutionary pattern, 72 is the new 30. …

“In gross comparative terms, this means that during evolution from a chimp-like ancestor to anatomically modern humans, mortality levels once typical of prime-of-life individuals were pushed back to later ages at the rate of a decade every 1.3 million years, but the mortality levels typical of a 15-y-old in 1900 became typical of individuals a decade older about every 30 y since 1900.”

Their second main set of comparisons is to look at the decline in human mortality compared with the mortality reductions achieved in laboratory settings, by manipulating the genetics and the environment of fruit flies, nematode worms, mice, and the like.

“Fruit fly selection experiments achieve significant extensions in life span by rearing successive generations from eggs laid by old individuals. In one classic example, mean life span increased by about 30% in 15 generations, for a rate of change of almost 2% per generation, and in another by about 100% in 13 generations, or just over 5% per generation. For human hunter-gatherers, mean life span at birth is about 31 … For Swedes, it was about 32 in 1800, 52 in 1900, and is 82 today. So life expectancy increased by about 165% from hunter-gatherers to modern Swedes and at a rate of about 12% per generation since 1800.
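The quoted rates can be roughly reproduced from the numbers in the passage with a few lines of Python. The 25-year generation length is my assumption for illustration; the paper’s exact convention may differ slightly:

```python
# Rough consistency check of the quoted rates, using the passage's numbers.
# Assumes ~25 years per human generation (an illustrative assumption).
years_per_gen = 25
e0_1800, e0_today = 32, 82   # Swedish life expectancy, 1800 vs. today
e0_hunter_gatherer = 31

# Total increase from hunter-gatherers to modern Swedes:
total_rise = (e0_today / e0_hunter_gatherer - 1) * 100

# Compound per-generation rate of improvement since 1800:
generations = (2012 - 1800) / years_per_gen
rate_per_gen = ((e0_today / e0_1800) ** (1 / generations) - 1) * 100

print(f"Total rise: {total_rise:.0f}%; about {rate_per_gen:.0f}% per generation")
```

Both figures come out close to the paper’s “about 165%” and “about 12% per generation,” so the quoted arithmetic hangs together under a standard generation-length assumption.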

“Some of the most promising directions for understanding the physiological mechanisms of aging come from experiments with mutations that affect the endocrine pathway. These impressive experiments have extended mean life span in nematode worms by >100%, fruit flies by ∼85%, and laboratory mice by ∼50%. Dietary restriction, which involves suppressing caloric intake of an organism, has extended life span in nematodes by 100–200%, fruit flies by ∼100%, and mice by ∼50%. Hence recent human mortality improvement is often greater than that achieved by manipulated strains of model organisms relative to the wild type, especially when single mutations or physiological pathways are manipulated. However, experiments that simultaneously manipulate multiple pathways in organisms such as yeast and nematode worms can achieve much greater life span extensions. The majority of laboratory studies where mammals are the model organism have been done on mice and yield percentage life span increases less than those gained by humans.”

It’s unclear just what the recent changes in human life expectancy mean for the long run, because they are so without parallel either in the evolutionary record or in the lab. It seems unlikely that the huge gains in human life expectancy since 1900 or so can be related to large changes in genetics or physiological processes: not enough generations have passed. As the authors ask: “Why does the human genome give humans a license to drastically reduce mortality by nongenetic change?” The answer is not yet clear, but what is clear is that there is a “biologically unique” plasticity in the human mortality decline that has already occurred.

Driverless Cars

Automobile travel transformed how people relate to distance: it decentralized how people live and work, and gave them a new array of choices for everything from the Friday night date to the long-distance road trip. I occasionally marvel that we can take our family of five, with all our gear, door-to-door for a getaway to a YMCA family camp 250 miles away in northern Minnesota–all for the marginal cost of less than a tank of gas. Driverless cars may turn out to be one of those rare inventions that transform transportation even further. KPMG and the nonprofit Center for Automotive Research published a report, \”Self-driving cars: The next revolution,\” last August. It\’s available from the KPMG website here, and from the CAR website here. I missed the report when it first came out, but then saw this story about it in a recent issue of the Economist magazine.

Many people have heard about the self-driving cars run by Google that have already driven over 200,000 miles on public roads. The report makes clear that automakers are taking this technology very seriously as well, and developing the range of sensor-based and connected-vehicle technologies that would be needed to make this work. Examples of the technology include the Light Detection and Ranging (LIDAR) equipment that does 360-degree sensing around a car. The LIDAR systems that Google retrofitted into cars cost about $70,000 per car. Dedicated Short-Range Communication (DSRC) has certain standards and a designated frequency for short-range communication, and can thus be focused on vehicle-to-vehicle and vehicle-to-infrastructure communication. Ultimately, these would be tied together so that self-driving cars could travel closely together in \”platoons\” and minimize traffic congestion. And it\’s not just Google and the car companies. Intel Capital, for example, recently launched a $100 million Connected Car Fund.

Of course, it is always possible that driverless cars will run up against insurmountable barriers. But skeptics should remember that the original idea of the automobile looked pretty dicey as well. Millions of adults will be personally driving motorized vehicles at potentially high speeds? They will drive on a network of publicly provided roads that will reach all over the country? They will fill these vehicles with flammable fuel that will be dispensed by tens of thousands of small stores all over the country? If the social gains seem large enough, technologies often have a way of emerging. With driverless cars, what are some of the gains? Quotations are from the report; as usual, footnotes are omitted for readability.

Costs of Traffic Accidents
Self-driving cars have the potential to save tens of thousands of lives, and prevent hundreds of thousands of injuries, every year.  \”In 2010, there were approximately six million vehicle crashes leading to 32,788 traffic deaths, or approximately 15 deaths per 100,000 people. Vehicle crashes are the leading cause of death for Americans aged 4–34. And of the 6 million crashes, 93 percent are attributable to human error. … More than 2.3 million adult drivers and passengers were treated in U.S. emergency rooms in 2009. According to research from the American Automobile Association (AAA), traffic crashes cost Americans $299.5 billion annually.\” Moreover, an enormous reduction in crash risk would allow cars to be redesigned to be much lighter.

Costs of Infrastructure

Driverless cars will allow many more cars to use a highway simultaneously. \”An essential implication for an autonomous vehicle infrastructure is that, because efficiency will improve so dramatically, traffic capacity will increase exponentially without building additional lanes or roadways. Research indicates that platooning of vehicles could increase highway lane capacity by up to 500 percent. It
may even be possible to convert existing vehicle infrastructure to bicycle or pedestrian uses. Autonomous transportation infrastructure could bring an end to the congested streets and extra-wide highways of large urban areas.\”

They could reduce the cost of design of highways. \”[T]oday’s roadways and supporting infrastructure must accommodate for the imprecise and often-unpredictable movement patterns of human-driven vehicles with extra-wide lanes, guardrails, stop signs, wide shoulders, rumble strips and other features
not required for self-driving, crashless vehicles. Without those accommodations, the United States could significantly reduce the more than $75 billion it spends annually on roads, highways, bridges, and other infrastructure.\”

Driverless cars will alter the need for parking.  Imagine that your car will drop you at your office door, head off to park itself, and come back when you call it.  \”In his book ReThinking a Lot (2012), Eran Ben-Joseph notes, “In some U.S. cities, parking lots cover more than a third of the land area, becoming the single most salient landscape feature of our built environment.”\”

Costs of Time
Driverless cars might be faster, but in addition, they open up the possibility of using travel time for work or relaxation. Your car could become a rolling office, or a place for watching movies, or a place for a nap. \”An automated transportation system could not only eliminate most urban congestion, but it would also allow travelers to make productive use of travel time. In 2010, an estimated 86.3 percent of all workers 16 years of age and older commuted to work in a car, truck, or van, and 88.8 percent of those drove alone … The average commute time in the United States is about 25 minutes.
Thus, on average, approximately 80 percent of the U.S. workforce loses 50 minutes of potential productivity every workday.  With convergence, all or part of this time is recoverable. Self-driving vehicles may be customized to serve the needs of the traveler, for example as mobile offices, sleep pods, or entertainment centers.\” I find myself imagining the overnight road-trip, where instead of driving all day, you sleep in the car and awake at your destination. 
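The commuting arithmetic in that quotation can be checked directly. The three figures below are the report's; multiplying the two shares together is my own back-of-the-envelope reading of how they fit:

```python
# Shares quoted from the KPMG/CAR report (2010 data).
car_commuters = 0.863    # workers 16+ commuting by car, truck, or van
drive_alone = 0.888      # share of those car commuters who drive alone
avg_commute_min = 25     # average one-way commute time, minutes

# Share of the workforce driving alone, which the report rounds to ~80%:
share_losing_time = car_commuters * drive_alone   # ~0.77
# Round-trip driving time that could become productive time:
daily_minutes_lost = 2 * avg_commute_min          # 50 minutes per workday
```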
 

Costs of Energy
The combination of much lighter cars, being driven much more efficiently, could dramatically reduce energy use. Lighter cars use less fuel: \”Vehicles could also be significantly lighter and more energy
efficient than their human-operated counterparts as they no longer need all the heavy safety features, such as reinforced steel bodies, crumple zones, and airbags. (A 20 percent reduction in weight corresponds to a 20 percent increase in efficiency.)\”  \”Platooning alone, which would reduce the effective drag coefficient on following vehicles, could reduce highway fuel use by up to 20 percent…\” \”According to a report published by the MIT Media Lab, “In congested urban areas, about 40 percent of total gasoline use is in cars looking for parking.\”

Costs of Car Ownership
Most cars are unused for 22 hours out of every day. I already know people in cities like New York who own a car, but keep it in storage for out-of-city trips. I know people who use companies like ZipCar, a membership-based service that lets you have a car for a few hours when you need it. Driverless cars may offer a replacement for car ownership. Need a car? A few taps on your smart-phone and one will come to meet you, and take you where you want to be. The price will of course be lower if you don\’t mind being picked up in an automated carpool. 

Mobility for the Young and the Old
Imagine being an elderly person who has become uncomfortable with driving, at least at certain times or under certain conditions. Driverless cars would offer continued mobility. Imagine being able to put your teenager in a car and have them safely delivered to their destination. Imagine always having a safe ride home after a night on the town.

How Fast?
The fully self-driving car isn\’t right around the corner. Clearly, costs need to come down substantially and a number of complementary technologies need to be created. However, we do already have cars in the commercial market with cruise control and anti-lock brakes, as well as cars that sense potential crash hazards and can parallel park themselves. Changes like these happen slowly, and then in a rush. As the report notes, \”The adoption of most new technologies proceeds along an S-curve, and we believe the path to self-driving vehicles will follow a similar trajectory.\” Maybe 10-15 years? Faster?

How Retirement Age Tracks Social Security\’s Rules

Back in 1983, one of the steps taken to bolster the long-run finances of the Social Security system was to phase in a rise in the \”normal\” or \”full\” retirement age. The normal retirement age for receiving full Social Security benefits had been 65, with \”early retirement\” at lower benefits possible at age 62. Under the new rules, the normal retirement age remained 65 for those born in 1937 or earlier–and thus turning 65 before 2002. It then phased up by 2 months per year, so that for those born six years later in 1943 or after, the normal retirement age is now 66. Written into law is a follow-up increase: the normal retirement age will rise from 66 to 67, phased in again at a rate of two months per year, for those born from 1955 to 1960.
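The phase-in just described can be written out as a small function; this is a sketch of the schedule as the paragraph states it, with the age expressed in months to keep the two-month steps exact:

```python
def normal_retirement_age_months(birth_year):
    """Social Security normal retirement age, in months, under the 1983 rules.

    65 for those born in 1937 or earlier, rising 2 months per birth year
    to 66 (born 1943-1954), then rising 2 months per birth year again to
    67 (born 1960 or later).
    """
    if birth_year <= 1937:
        return 65 * 12
    if birth_year <= 1942:
        return 65 * 12 + 2 * (birth_year - 1937)
    if birth_year <= 1954:
        return 66 * 12
    if birth_year <= 1959:
        return 66 * 12 + 2 * (birth_year - 1954)
    return 67 * 12
```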

How has this change altered actual retirement patterns? What are the reasons, either for retirees or for the finances of Social Security, to encourage still-later retirement?

Economists have long recognized that what a government designates as the \”normal\” retirement age has a big effect on when people actually choose to retire. Luc Behaghel and David M. Blau present some of the recent evidence in \”Framing Social Security Reform: Behavioral Responses
to Changes in the Full Retirement Age,\” which appears in the November 2012 American Economic Journal: Economic Policy (4(4): 41–67). (The journal isn\’t freely available on-line, but many in academia will have access through a library subscription.)

Consider the following graphs from Behaghel and Blau. Each one is for those born in a different year, from 1937 up through 1942, as the normal retirement age phased up. These people are the ones hitting age 65 in the early and mid-2000s. The solid line shows the probability of retirement at each age. Vertical red lines mark the early retirement age of 62, the previous normal retirement age of 65, and the actual normal retirement age for that birth year as it phases up two months per year. The dashed line, which is the same in all the figures, shows for comparison the retirement pattern for those born over the 1931-1936 period.

The main striking pattern is that the probability of retiring at a certain age almost exactly tracks the changes in the normal retirement age: that is, the solid line spikes at the red vertical line showing the normal retirement age. There is also a spike at the early retirement age of 62. Here are the patterns.

The evidence here seems clear: people are making their retirement choices in sync with the government-set normal retirement age. This pattern isn\’t new: as the authors point out, a spike in retirement at age 65 became visible in the data back in the early 1940s, about five years after Social Security became law. Still, the obvious question (for an economist) is why people would make this choice. If you retire later than the normal retirement age, your monthly benefits are scaled up, so from the viewpoint of overall expected lifetime payments, you don\’t gain from retiring earlier. A number of possible explanations have been proposed: 1) people don\’t have other sources of income and need to take the retirement benefits as soon as possible for current income; 2) people are myopic, or don\’t recognize that their monthly benefits would be higher if they delayed retirement; 3) many people are waiting until age 65 to retire so that they can move from their employer health insurance to Medicare; 4) some company retirement plans encourage retiring at age 65.

However, none of these explanations give an obvious reason for why the retirement age would exactly track the changes in Social Security normal retirement age, so it seems as if a final \”behavioral\” explanation is that the \”normal\” retirement age announced by the government, whatever it is, is then treated by many people as a recommendation that should be taken. Choosing a retirement date in this way is probably suboptimal both for individuals and for the finances of the Social Security system.

From the standpoint of individuals, there\’s a widespread sense among economists that many retirees would benefit from having more of their wealth in annuities–that is, an amount that would pay out no matter how long they live. In the Fall 2011 issue of my own Journal of Economic Perspectives, 
Shlomo Benartzi, Alessandro Previtero, and Richard H. Thaler have an article on \”Annuitization Puzzles,\” which makes the point that when you delay receiving Social Security, you are in effect buying an annuity: that is, you are taking less in the present–which is similar to \”paying\” for the annuity– in exchange for a larger long-term payment in the future. They write: \”[T]he easiest way to increase the amount of annuity income that families have is to delay the age at which people start claiming Social Security benefits. Participants are first eligible to start claiming benefits at age 62, but by waiting to begin, the monthly payments increase in an actuarially fair manner until age 70. \”

They further argue that a good starting point to encouraging such behavior would be to re-frame the way in which the Social Security Administration, and all the rest of us, talk about Social Security benefits. Imagine that, with no change at all in the current law, we all started talking about a \”standard retirement age\” of 70. We pointed out that you can retire earlier, but if you do, monthly benefits will be lower. If the choice of when to retire was framed in this way, my strong suspicion is that many more people would react differently than when we announce that the \”normal retirement age\” is 66, and if you wait then your monthly benefits will be higher. Again, people seem to react to what the government designates as the target for retirement age.

This labeling change might encourage people to work longer, but it would not affect the solvency of the Social Security system, because those who wait longer to retire are, in effect, paying for their own higher monthly benefits by delaying the receipt of those benefits. However, the Social Security actuaries offer a number of illustrative calculations on their website about possible steps to bolster the financing of the system. One proposal for raising the normal retirement age looks like this:  \”After the normal retirement age (NRA) reaches 67 for those age 62 in 2022, increase the NRA 2 months per year until it reaches 69 for individuals attaining age 62 in 2034. Thereafter, increase the NRA 1 month every 2 years.\”
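The actuaries' quoted schedule can be sketched the same way, again in months. Translating age 62 in 2022 and 2034 into birth years 1960 and 1972 is my own arithmetic:

```python
def proposed_nra_months(birth_year):
    """Illustrative actuaries' proposal quoted above: NRA of 67 for those
    born in 1960 (age 62 in 2022), rising 2 months per birth year to 69
    for those born in 1972 (age 62 in 2034), then 1 month every 2 birth
    years thereafter.  Only meaningful for birth years 1960 and later.
    """
    if birth_year <= 1960:
        return 67 * 12
    if birth_year <= 1972:
        return 67 * 12 + 2 * (birth_year - 1960)
    return 69 * 12 + (birth_year - 1972) // 2
```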

Thus, this proposal would represent no change in the rules for Social Security benefits for anyone born before 1960–and thus in their early 50s at present. Those born after 1960 would face the gradual phase-in–but of course, they would also benefit from having a program that is much closer to fully funded. The actuaries estimate that this step by itself would address about 44% of the gap over the next 75 years between what Social Security has promised and the funding that is expected during that time. Given the predicted shortfalls of the Social Security system in the future, the gains in life expectancy both in the last few decades and expected in the next few decades, and the parlous condition of large budget deficits reaching into the future, I would be open to proposals that phase in a more rapid and more sustained rise in the normal retirement age for Social Security benefits.

Time Watching Television

I recently ran across this historical data from the Nielsen company for the time American households spend watching television, per day.

I rarely watch 8 hours of television per week, much less per day. I had a conversation the other day in which someone was incredulous that I have never seen an episode of Seinfeld, or Friends, or actually any sitcom in the last decade or so. I told them that I used to watch M*A*S*H now and then, and they looked at me with pity.

Economists sometimes quote the old proverb: \”De gustibus non est disputandum.\” There\’s no arguing over taste. We tend to accept consumer tastes and preferences as given, and proceed from there. I suppose that those of us who blog, and then hope for readers, can\’t really complain about those who spend time looking at a screen. I certainly have my own personal time-wasters, like reading an inordinate number of mysteries. I assume that for many people the television is on in the background of other activities. But at some deep level, I just don\’t understand averaging 8 hours of television per day. I always remember the long-ago jibe from the old radio comedian Fred Allen: \”Television is a medium because anything well done is rare.\”

Note: Thanks to Danlu Hu for downloading the data and creating this figure.

Would Inflation Help Cut Government Debt?

When teaching about the effects of an unexpected surge of inflation, I always point out that those who borrowed at a fixed rate of interest benefit from the inflation, because they can repay their borrowing in inflated (and less valuable) dollars. And sometimes I toss in the mock-cheerful reminder that the U.S. government is the single biggest borrower–and thus presumably has a vested interest in a higher rate of inflation. But presuming an easy connection from higher inflation to reduced government debt burdens is actually a more problematic policy than it may at first appear.

Menzie Chinn and Jeffry Frieden make a lucid case for how higher inflation could ease the way to a lower real debt burden in an essay in the Milken Institute Review (available on-line with free registration). They point out that after World War II the U.S. government had accumulated a total debt of more than 100% of GDP, but that it cut that debt/GDP burden in half in about 10 years with a combination of economic growth and about 4% inflation. They are at pains to point out that they aren\’t suggesting a lot of inflation. But as they see it, given that debt/GDP ratios are extremely high by historical standards in the U.S. and in a number of other high-income countries, a quiet process of slowly inflating away some of the real value of the debt is far preferable to the messy process of governments threatening to default. They write:

\”Creditors, of course, receive less in real terms than they had contracted for – and probably less than they expected when they agreed to the contract. That may seem unfair. But the outcome is little different than what happens to creditors when they are forced to accept the restructuring of their claims through one form of bankruptcy or another. … It’s important to remember, though, that we are not suggesting a lot of inflation – certainly nothing like the double-digit rates that followed the second oil shock in 1979 to 1981. Rather, we believe the goal should be to target moderate inflation, only enough to reduce the debt burden to more manageable levels, and adjust monetary policy accordingly. This probably means something in the 4 to 6 percent range for several years. … We’re not claiming that inflation is a painless way to speed deleveraging. We are claiming, though, that it is less painful than the realistic alternatives. … Unusual times call for unusual measures.\”
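The postwar episode Chinn and Frieden cite can be reproduced with a two-line growth calculation. This is an illustrative sketch: the 4% inflation figure comes from their essay, while the real growth rate is a round number of my own, and the exercise assumes a fixed stock of nominal debt with no new borrowing:

```python
# Start from roughly the post-WWII U.S. debt burden: 100% of GDP.
debt_to_gdp = 1.00
real_growth = 0.035   # assumed real GDP growth rate (my round number)
inflation = 0.04      # roughly the postwar inflation cited in the essay

# With no new borrowing, the ratio shrinks with nominal GDP each year.
for year in range(10):
    debt_to_gdp /= (1 + real_growth) * (1 + inflation)

# After a decade the ratio is roughly halved, as in the episode described.
```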

The counterargument, which holds that inflation may not do much to reduce debt/GDP ratios, starts from this insight: yes, inflation reduces the real value of past debt, and that worked in a situation like the aftermath of World War II, when large debts had been incurred but the borrowing then stopped. For example, the U.S. government ran budget surpluses in four of the five years from 1947-1951. But if fiscal policy is on an unsustainable path of overly large deficits, then inflation isn\’t going to fix the problem. In an essay appearing in the Annual Report of the Federal Reserve Bank of Richmond, \”Unsustainable Fiscal Policy: Implications for Monetary Policy,\” Renee Haltom and John A. Weinberg argue that inflation would not actually offer much hope of reducing the current U.S. government debt burden.

\”It is useful to consider how much inflation would be required to adequately reduce current debt levels. … To consider how much inflation would be required today to address current debt imbalances, Michael Krause and Stéphane Moyen (2011) estimate that a moderate rise in inflation to 4 percent annually sustained for at least 10 years—in effect a permanent doubling of the Fed’s inflation objective—would reduce the value of the additional debt that accrued during the 2008–09 financial crisis, not the total debt, by just 25 percent. If the rise in inflation lasted only two or three years, a 16 percentage point increase—from roughly 2 percent inflation today to 18 percent—would be required to reduce that additional debt by just 3 percent to 8 percent. Such inflation rates were not reached even in the worst days of the inflationary 1970s. The reason inflation has such a minimal impact on debt in Krause and Moyen’s estimates is that while inflation erodes the value of existing nominal debt, it increases the financing costs for newly issued debt because investors must be compensated to be willing to hold bonds that will be subject to higher inflation. This effect would be greater for governments such as the United States that have a short average maturity of government debt and therefore need to reissue it often.

\”With these estimates in mind, it is worth recalling the CBO’s projection that debt held by the public may triple as a percent of GDP within 25 years. The estimates cited above suggest that inflation is simply not a viable strategy for reducing such debt levels. In addition, it is important to remember that inflation is costly on many levels. Inflation high enough to significantly erode the debt would inflict considerable damage on the economy and would require costly policies for the Fed to regain its credibility after the fact. Inflation that was engineered specifically to erode debt would provide a significant source of fiscal revenue without approval via the democratic process, and so would
raise questions about the role of the central bank as opposed to the roles of Congress and the executive branch in raising fiscal revenues. Ultimately, the solution to high debt levels must come from fiscal authorities.\”

In a similar spirit, the IMF wrote in Chapter 3 of its most recent World Economic Outlook that inflation at low levels often seems to have little effect in reducing government debt: \”The relationship between inflation and [government] debt reduction is more ambiguous. Although hyperinflation is clearly associated with sharp debt reduction, when hyperinflation episodes are excluded, there is no clear association between the average inflation rate and the change in debt.\”

In short, if federal deficits are first definitively placed on a diminishing path, then a quiet surge of unexpected inflation could help in reducing the past debts. But on the current U.S. trajectory of a steadily-rising debt/GDP ratio over the next few decades, inflation isn\’t the answer–and could end up just being another part of the problem.

Can the Cure for Cancer Be Securitized?

The October 2012 issue of Nature Biotechnology offers several articles on the theme of \”Commercializing biomedical innovations.\” The opening \”Editorial\” sets the stage this way: \”Investment in biomedical innovation is not what it once was. Millions of dollars have fled the life sciences risk capital pool. The number of early venture deals in biotech is smaller than ever. Public markets are all but closed, biotech-pharma deals increasingly back-loaded with contingent,
rather than upfront, payments. Paths to market are more winding and stonier. Government cuts are closing laboratories and culling blue-sky research. Never has there been a more pressing need to look beyond the existing pools of funding and talent to galvanize biomedical innovation.\”

Thus, the papers look at a variety of topics: interactions between universities and the biomed industry; different business models for biomed firms; how venture capital firms often seem to enter biomed start-ups \”too early,\” well before a commercial payoff can be expected; funding research through nonprofit foundations that promote free dissemination of any findings; and others. But my eye was particularly caught by a proposal from three economists, Jose-Maria Fernandez, Roger M. Stein, and Andrew W. Lo, for \”Commercializing biomedical research through securitization techniques.\”

 These authors point out a paradoxical situation in biomedical research. On one side, the research journals and even the news media are full of breakthrough developments, \”including gene therapies for previously incurable rare diseases, molecularly targeted oncology drugs, new modes of medical imaging and radiosurgery, biomarkers for drug response or for such diseases as prostate cancer and heart disease, and the use of human genome sequencing to find treatments for diseases that have confounded conventional medicine, not to mention advances in bioinformatics and computing power that have enabled many of these applications.\”

On the other side, the existing business structures for translating these developments into new products don\’t seem to be working well. \”Consensus is growing that the bench-to-bedside process of translating biomedical research into effective therapeutics is broken. … The productivity of big pharmaceutical companies—as measured by the number of new molecular entity and biologic license applications per dollar of R&D investment—has declined in recent years … Life sciences venture-capital investments have not fared much better, with an average internal rate of return of −1% over the 10-year period from 2001 through 2010 …\”

Fernandez, Stein, and Lo suggest that the fundamental problem is that the technological breakthroughs present a vast array of possibilities, but these possibilities are complex and costly to pursue. A large portfolio of new biomed innovations is probably, overall, a money-maker. But when firms need to think about pursuing just a few of the many possibilities, at great cost, they may often decide not to do so. They write:

\”The traditional quarterly earnings cycle, real-time pricing and dispersed ownership of public equities imply constant scrutiny of corporate performance from many different types of shareholders, all pushing senior management toward projects and strategies with clearer and more immediate payoffs, and away from more speculative but potentially transformative science and translational research. … Industry professionals cite the existence of a ‘valley of death’—a funding gap between basic biomedical research and clinical development. For example, in 2010, only $6–7 billion was spent on translational efforts, whereas $48 billion was spent on basic research and $127 billion was spent on clinical development that same year.\”

What\’s their alternative? \”We propose an alternative for funding biomedical innovation that addresses these issues through the use of ‘financial engineering’… Our approach involves two components: (i) creating large diversified portfolios—‘megafunds’ on the order of $5–30 billion—of biomedical projects at all stages of development; and (ii) structuring the financing for these portfolios as combinations of equity and securitized debt so as to access much larger sources of investment capital. These two components are inextricably intertwined: diversification within a single entity reduces risk to such an extent that the entity can raise assets by issuing both debt and equity, and the much larger capacity of debt markets makes this diversification possible for multi-billion-dollar portfolios of many expensive and highly risky projects. … In a simulation using historical data for new molecular entities in oncology from 1990 to 2011, we find that megafunds of $5–15 billion may yield average investment returns of 8.9–11.4% for equity holders and 5–8% for ‘research-backed obligation’ holders, which are lower than typical venture-capital hurdle rates but attractive to pension funds, insurance companies and other large institutional investors.\”
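The diversification logic at the heart of the megafund idea can be illustrated with a toy Monte Carlo simulation. The success probability and payoff below are invented round numbers of my own, not the authors' calibration; the point is only that pooling many independent long-shot projects sharply cuts the chance the fund as a whole loses money:

```python
import random

random.seed(0)

def portfolio_return(n_projects, p_success=0.05, payoff=40.0):
    """Net return per unit invested on a portfolio of long-shot projects.

    Each project costs 1 unit and independently succeeds with probability
    p_success, paying off `payoff` units if it does.
    """
    hits = sum(1 for _ in range(n_projects) if random.random() < p_success)
    return (hits * payoff - n_projects) / n_projects

def prob_of_loss(n_projects, trials=2000):
    """Simulated probability that the portfolio loses money overall."""
    losses = sum(1 for _ in range(trials) if portfolio_return(n_projects) < 0)
    return losses / trials

small_fund = prob_of_loss(5)    # a 5-project fund loses money most of the time
mega_fund = prob_of_loss(500)   # a 500-project fund almost never does
```

A fund that almost never loses money can issue debt as well as equity, which is exactly the role of the research-backed obligations in the proposal.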

Frankly, I have no clear idea about whether the Fernandez, Stein, and Lo approach to raising money for biomed companies is viable. One never knows in advance whether an innovation will function well and fulfill real-world needs, whether that innovation is financial or real. But if the markets can put together this sort of deal, it might offer an enormous boost to the process of translating biomedical innovation into actual health care products. In the aftermath of the Great Recession, the words \”financial innovation\” are often spoken with a heavy dose of sarcasm, as if all we need for a 21st-century economy is good old passbook savings accounts. But the Fernandez, Stein, and Lo proposal is an example of how financial innovation might save lives by addressing an important real-world problem. It seems well worth someone trying it out–with the proviso that if it doesn\’t work, no one gets bailed out!

When Rent Control Ended in Cambridge, Mass.

Every intro class teaches about price ceilings, and I suspect that 99% of them use rent control laws as an example. Of course, the standard lesson from a supply-and-demand diagram is that price ceilings lead to a situation where the quantity demanded exceeds the quantity supplied, and so while the price of rent-controlled apartments is lower, good luck in finding a vacancy!

The slightly more sophisticated insight is what I call in my own intro textbook the problem of \”many margins for action.\” (Of course, if you are teaching an intro econ class, I encourage you to take a look at my Principles of Economics textbook, a high quality and lower-cost alternative to the big publishers, available here.)  Landlords who face rent control legislation can skimp on maintenance, or hunt for ways to force the renter to bear additional fees or costs. If a large number of landlords act in this way, the feeling of the neighborhood and property values for homes that are not rentals may be affected, too.

Cambridge, Massachusetts, had a rent control law in place from 1970 to 1994. It was ended by a statewide vote that barely squeaked out a 51%-49% majority–and ended despite the fact that Cambridge residents favored the continuation of the law by a 60%-40% majority. The law placed limits on rents for all rental properties in Cambridge built in 1969 or earlier. In \”Housing Market Spillovers: Evidence from the End of Rent Control in Cambridge, Massachusetts,\” David H. Autor, Christopher J. Palmer, and Parag A. Pathak look at what happened. (The paper is published as NBER Working Paper 18125. These working papers are not freely available on-line, but many in academia will have access through institutional memberships. Full disclosure: David Autor is editor of my own Journal of Economic Perspectives, and thus my boss.) Autor, Palmer, and Pathak have data on rents and prices in controlled rental buildings, uncontrolled rental buildings, and owner-occupied housing. They can also make comparisons to neighboring suburbs that did not have rent controls in place. Here are a few of their more striking findings:

— The rent-controlled buildings in Cambridge, Mass., typically had rents 25%-40% below the level of uncontrolled rental buildings nearby. However, the maintenance of rent-controlled buildings was often subpar, with a higher incidence of issues like holes in walls or floors, chipped or peeling paint, loose railings, and the like. More broadly, owners of rent-controlled properties had no incentive to undertake major fix-ups or renovations, because they would be unable to recoup the costs.

— Rent control laws are still easy to find, if not exactly widespread, in the United States. For example, "New York City’s system of rent regulation affects at least one million apartments, while cities such as San Francisco, Los Angeles, Washington DC, and many towns in California and New Jersey have various forms of rent regulation."

— Not surprisingly, the end of rent control in 1995 meant that prices of the buildings that had formerly been rent-controlled rose. "Our statistical analysis also indicates that rent controlled properties were valued at a discount of about 50 percent relative to never-controlled properties with comparable characteristics in the same neighborhoods during the rent control era, and that the assessed values of these properties increased by approximately 18 to 25 percent after rent control ended."

— More surprisingly, it turns out that the end of rent control raised the value of all the non-controlled properties in Cambridge, too. Properties in neighborhoods with a higher percentage of rent-controlled properties increased in value by more than those in neighborhoods with a lower percentage. Indeed, when rent control ended, the gains to owners of uncontrolled properties were greater in total than the gains to the owners of rent-controlled properties. "The economic magnitude of the effect of rent control removal on the value of Cambridge’s housing stock is $1.8 billion. We calculate that positive spillovers from decontrol added $1.0 billion to the value of the never-controlled housing stock in Cambridge, equal to 10 percent of its total value and one-sixth of its appreciation between 1994 and 2004. Notably, direct effects on decontrolled properties are smaller than the spillovers. We estimate that rent control removal raised the value of decontrolled properties by $770 million, which is 25 percent less than the spillover effect."
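The decomposition in that last quote can be verified with back-of-the-envelope arithmetic, using only the dollar figures the authors report:

```python
# Figures in billions of dollars, as quoted from Autor, Palmer, and Pathak.
total_gain = 1.8   # rise in value of Cambridge's entire housing stock
spillover = 1.0    # gain to never-controlled properties
direct = 0.77      # gain to the decontrolled properties themselves

# The two components should roughly sum to the reported total.
print(round(spillover + direct, 2))   # 1.77, consistent with ~$1.8 billion

# The direct effect relative to the spillover: about 23% smaller,
# which the authors round to "25 percent less."
print(round((spillover - direct) / spillover, 2))
```

The striking point survives the rounding: the indirect, neighborhood-level gains exceed the direct gains to the formerly controlled buildings.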

Taking all of this together, it seems to me that the way to think about rent control, at least in the form in which it was enacted in Cambridge, Mass., is that it creates a stock of low-quality, poorly maintained housing, which then rents for less than uncontrolled properties. If the goal of public policy is to create lower-quality and more affordable housing, there are other ways to accomplish that goal. For example, zoning laws could require that rental complexes include a mixture of regular and small-sized rental apartments, so that the small-sized (and thus "lower quality") apartments would rent for less. Or those with lower incomes could simply receive housing vouchers.

But when rent control is enacted in a way that leads to degradation of a substantial portion of the housing stock, the costs are not carried only by landlords of the rent-controlled apartments. In fact, a majority of the costs may result from spillover effects onto real estate that isn't rent-controlled. When a substantial proportion of the houses in a neighborhood are not well-maintained, everyone's housing prices will suffer.