The Excessive Sameness of Politics and Hotelling’s Main Street

A lot of politicians may sound like they have differentiated views on the campaign trail, but either during the campaign or after being elected, they seem to become homogenized and squishy in their views. Thus, many voters of all political dispositions are continually frustrated because they feel as if all politicians are discomfortingly alike. Maybe you want to vote for someone who isn’t a hanging-off-the-ideological-cliff extremist, but you would still like to vote for someone with clear and definite views–even if you differ with some of those views. To quote a phrase associated with the Goldwater presidential campaign of 1964, but applicable to all sides of the political spectrum, many voters feel that they want “a choice, not an echo.”

A famous long-ago economist named Harold Hotelling proposed a classic explanation for this phenomenon in a paper called “Stability in Competition,” published in the March 1929 issue of the Economic Journal (39:153, pp. 41-57).

In one of his illustrations, Hotelling discussed the case of two sellers of a product who are thinking about where to locate along Main Street. For simplicity, imagine that the addresses along the street are numbered from 1 to 100. The working assumption is that customers are spread evenly along Main Street, and the customers will go to whichever store is located closer to them. In this situation, if one store locates at, say, 10 Main Street, the other store will then choose to locate at 11 Main Street. The first store will then get all the customers from 1-10, and the second store will get all the customers from 11-100. The first store will then relocate to 12 Main Street, to snag the majority of customers, and the two stores will keep leap-frogging each other and relocating until they end up located side by side, right in the middle of Main Street.
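This leap-frogging dynamic is easy to see in a short best-response simulation. The sketch below is my own minimal illustration of Hotelling’s story, not anything from his paper: each store repeatedly relocates to the address that captures the most customers given its rival’s position, with customers at addresses 1-100 going to the nearer store and splitting ties evenly.

```python
def customers(a, b):
    """Share of the 100 customers (at addresses 1-100) who shop at the store at
    address a rather than at address b: each goes to the nearer store, ties split."""
    share = 0.0
    for x in range(1, 101):
        if abs(x - a) < abs(x - b):
            share += 1
        elif abs(x - a) == abs(x - b):
            share += 0.5
    return share

def best_response(rival):
    """The address (other than the rival's) that captures the largest customer share."""
    return max((p for p in range(1, 101) if p != rival),
               key=lambda p: customers(p, rival))

# Start the stores far apart and let them take turns relocating.
a, b = 10, 90
for _ in range(200):
    new_a = best_response(b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (a, b):  # no store wants to move: an equilibrium
        break
    a, b = new_a, new_b

print(sorted([a, b]))  # → [50, 51]
```

Starting the two stores at 10 and 90 Main Street, the simulation ends with them side by side at addresses 50 and 51: exactly the clustering in the middle that Hotelling described.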

As Hotelling pointed out back in 1929, this clustering is not ideal. From the consumers’ point of view, it would be more useful to have the two stores located at 25 Main Street and 75 Main Street, because then no consumer along the street from 1 to 100 would be more than 25 addresses away from a store. But the dynamics of competition can lead to excessive clustering.

Hotelling argued that this excessive sameness is apparent in many aspects of public competition, including competition between firms introducing new products, and competition between Republicans and Democrats. He wrote:

“Buyers are confronted everywhere with an excessive sameness. When a new merchant or manufacturer sets up shop he must not produce something exactly like what is already on the market or he will risk a price war … But there is an incentive to make the new product very much like the old, applying some slight change which will seem an improvement to as many buyers as possible without ever going far in this direction. The tremendous standardisation of our furniture, our houses, our clothing, our automobiles and our education are due in part to the economies of large-scale production, in part to fashion and imitation. But over and above these forces is the effect we have been discussing, the tendency to make only slight deviations in order to have for the new commodity as many buyers of the old as possible, to get, so to speak, between one’s competitors and a mass of customers.

“So general is this tendency that it appears in the most diverse fields of competitive activity, even quite apart from what is called economic life. In politics it is strikingly exemplified. The competition for votes between the Republican and Democratic parties does not lead to a clear drawing of issues, an adoption of two strongly contrasted positions between which the voter may choose. Instead, each party strives to make its platform as much like the other’s as possible. Any radical departure would lose many votes, even though it might lead to stronger commendation of the party by some who would vote for it anyhow. Each candidate ‘pussyfoots,’ replies ambiguously to questions, refuses to take a definite stand in any controversy for fear of losing votes. Real differences, if they ever exist, fade gradually with time though the issues may be as important as ever. The Democratic party, once opposed to protective tariffs, moves gradually to a position almost, but not quite, identical with that of the Republicans. It need have no fear of fanatical free-traders, since they will still prefer it to the Republican party, and its advocacy of a continued high tariff will bring it the money and votes of some intermediate groups.”

Of course, it’s not literally true that Republican and Democratic politicians both locate exactly in the middle of the political spectrum. Hotelling was describing a tendency to push to the middle, but in politics, there is also a need to assure your voters that you share their beliefs. Thus, there’s a saying that American politics is a battle fought between the 40-yard lines. (For those unfamiliar with the line markers on an American football field, the statement suggests that the political battle is fought between the addresses of 40 and 60 on a Hotelling-style Main Street.) Mainstream politicians thus face a continual dynamic where they seek to reassure their more ardent partisans that they are on their side, while shading and tacking as needed to pick up voters in the middle. At an intuitive level, politicians recognize that offering “a choice, not an echo” is part of what led Barry Goldwater to a loss of historic magnitude in the 1964 US presidential election.

Political competition that usually takes place between centrists, whether right-of-center or left-of-center, does have some benefits. Extremists are much less likely to win high office. And even when the other side wins, it’s reassuring to think that the person who won is at least closer to the center than the true believers at the extreme of that side. But every now and then, many of us yearn for a few more conviction politicians, who say what they mean and mean what they say, who play a greater role in driving the public debate, and who are OK with the possibility that doing so might end up costing them an election.

[For the record, using the metaphor of a football field to describe the range of political choice seems to have originated with the 1970 best-seller The Real Majority: An Extraordinary Examination of the American Electorate, by Ben Wattenberg and Richard M. Scammon. But they used the image to discuss how political conflict might sometimes be between those near the middle and sometimes between those with more extreme positions. The claim that American politics usually happens between the 40-yard lines is one of those statements that seems to have evolved afterwards, without a clear single author.]

Is Better Communication Longer and More Complex?

Twenty years ago, when the Federal Reserve Open Market Committee wanted to change interest rates, it didn’t make any announcement. It simply took action, and market participants had to infer the policy change from those actions. Mark Wynne of the Federal Reserve Bank of Dallas explains in “A Short History of FOMC Communication”:

The first time the FOMC issued a statement immediately after a meeting explaining what action had been decided was on Feb. 4, 1994. That statement simply noted that the committee decided to “increase slightly the degree of pressure on reserve positions” and that this was “expected to be associated with a small increase in short-term money market interest rates.” By way of explanation for why the committee was announcing its decision, the statement said that this was being done “to avoid any misunderstanding of the committee’s purposes, given the fact that this is the first firming of reserve market conditions by the committee since early 1989.” In February 1995, the committee decided that all changes in the stance of monetary policy would be announced after the meeting.

But over the years, these announcements of Fed policy have become longer and more complex. Rubén Hernández-Murillo and Hannah Shell of the Federal Reserve Bank of St. Louis have created a vivid figure to show the change in “The Rising Complexity of the FOMC Statement.” The colors of the circles show who was leading the Fed at the time: blue for Greenspan, red for Bernanke, and green for Yellen. The area of each circle shows the number of words in the statement: clearly, the statements have been getting wordier over time. And on the vertical axis, the FOMC statements were run through a standard diagnostic tool for determining their “reading grade level.” In short, the statements back in the mid-1990s were often pitched at about a 12th-grade level. But over time, and especially after the financial crisis hit, the FOMC statements ratcheted up to a “19th-grade” level, which is to say that they were pitched at readers with post-college graduate study.
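For readers curious how such a “reading grade level” is computed, here is a rough sketch of the Flesch-Kincaid grade formula, a common choice for this kind of diagnostic (though not necessarily the exact tool the authors used), with a crude vowel-group heuristic standing in for a real syllable counter:

```python
import re

def syllables(word):
    """Very rough syllable count: runs of vowels, dropping one for a trailing silent 'e'."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    total_syllables = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * total_syllables / len(words) - 15.59

plain = "The cat sat. The dog ran."
fedspeak = ("The committee decided to maintain accommodative monetary policy "
            "to support continued progress toward maximum employment and price stability.")
print(round(fk_grade(plain), 1), round(fk_grade(fedspeak), 1))
```

Longer sentences and more multi-syllable words both push the grade level up, which is exactly the pattern in the lengthening FOMC statements.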

This trend raises a question last spotted among the “advice to the lovelorn” columnists: Is communication better if it is longer and more complex? As an editor, I confess that I’m suspicious of length and complexity. A wise economist friend used to point out to me that in academia, specialized terminology always serves two purposes: it streamlines and simplifies communication among specialists, and it shuts out nonspecialists. Of course, all academics like to believe that we are only using specialized terminology for the loftiest of intellectual purposes, not because we are as much of an in-group as a set of gossiping teenagers, with our own slang devised to define membership in the group and to separate ourselves from others.

But even my cynical side remembers a fundamental rule of exposition, often attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

It makes sense that the Federal Reserve statements became longer and more complex as the Fed began to specify a numerical range for its interest rate policies in the late 1990s; then began to describe how it saw future risks in 1999; then began to specify how quickly it expected to adjust future monetary policy in 2004; and then began its “quantitative easing” policy in 2008.

In December 2012, as Wynne points out, the Fed altered its communication substantially by saying that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent, inflation between one and two years ahead is projected to be no more than a half percentage point above the committee’s 2 percent longer-run goal and longer-term inflation expectations continue to be well anchored.” In other words, the Fed for the first time had announced that it would keep interest rates at a certain level until a certain economic statistic–the unemployment rate–had moved in a certain way. But when the unemployment rate fell beneath 6.5% in April 2014, this earlier statement led to expectations that the Fed would start raising interest rates–which, as it turned out, the Fed wasn’t yet quite ready to do.

How the Federal Reserve and other major central banks carry out their policies has been fundamentally transformed during the last six years. Explaining these changes is important. But if the explanations are pitched at a level that only makes sense to PhD economists, they aren’t much help. And as the personal advice columnists remind us, if someone is talking and talking but not giving you a straight answer that you can understand, you have some reason to mistrust whether they know their own mind–and even whether they are really trying to tell you the truth. I’ll give the last word here to Hernández-Murillo and Shell:

As the Fed returns to using conventional monetary policy tools, it is likely that the reading levels of its statements will decline. However, if the Fed continues to use unconventional instruments for a considerable period, it may need to consider how to explain its policy actions in simpler terms to avoid volatility in financial markets. 

Should Voting be Compulsory?

Just to put my cards face up on the table right here at the start, I’m not in favor of compulsory voting. But I think the case for doing so is stronger than commonly recognized. Let me lay out the arguments as I see them: low voter turnout, what the penalties look like in some other countries for not voting, the free speech/constitutional issues, and whether any resulting differences in outcomes would be desirable.

[This post was originally published on Election Day, November 6, 2012.]

The case for compulsory voting begins with the (arguable) notion that democracy would be better served if participation in elections were higher. Here’s a figure from a post of mine a couple of months ago on “Voter Turnout Since 1964.” With some variation across age groups, voter turnout in presidential elections has been sagging over the last few decades.


Some nations have responded to concerns over low voter turnout by passing laws that make it a requirement to vote. Here’s a list of countries with such laws, and the penalties that they impose for not voting, taken from a June 2006 report from Britain’s Electoral Commission. The penalties are categorized from “Very Strict” to “None.” But honestly, even the “Very Strict” penalties are not especially onerous.



In talking with people on this subject, I’ve found that one immediate response is that compulsory voting must be a violation of freedom of speech in some way. I have some of this reaction myself. But while one may reasonably oppose the idea of compulsory voting, the case that it violates a specific law or constitutional right is difficult to make. Indeed, the original 1777 constitution of the state of Georgia specifically called for a potential penalty of five pounds for not voting–although it also allowed an exception for those with a good explanation. If the U.S. government can require you to pay money for taxes, or compel you to serve on jury duty, or institute a military draft, it probably has the power to require that you show up and vote. Of course, a compulsory voting law would almost certainly include provisions for conscientious objectors to voting, and you would be permitted to turn in a totally blank ballot if you wish. The penalties for not voting would be an inconvenience, but far from draconian.

For a review of the various legal and constitutional ins and outs of compulsory voting, along with some of the practical arguments, I recommend this anonymous 2007 note in the Harvard Law Review, called “The Case for Compulsory Voting.”

The author points out (footnotes omitted): “Approximately twenty-four nations have some kind of compulsory voting law, representing 17% of the world’s democratic nations. The effect of compulsory voting laws on voter turnout is substantial. Multivariate statistical analyses have shown that compulsory voting laws raise voter turnout by seven to sixteen percentage points.”

The anonymous author also offers what seem to me ultimately the two strongest arguments for compulsory voting. The first argument is that a larger turnout will (arguably) provide a more accurate representation of what the public wants, and in that sense will strengthen the bond between the electorate and its elected representatives. The second and more subtle argument is that compulsory voting would mean that political parties could focus much less on voter turnout. Less money and effort could go into turning out the vote, and more into persuasion. Those who now vote almost certainly have stronger partisan feelings, on average, than those who don’t vote. So politicians aim their advertisements and strategies at that more partisan group. Many negative campaign ads attempt to reduce turnout for a candidate: if turnout were high, the usefulness of such negative ads could be diminished. A broader spectrum of voters would push candidates to offer a broader spectrum of messages to appeal to those voters, and groups that now have low turnout would find themselves equally courted by politicians.

The question becomes whether these potential benefits to the democracy as a whole are worth the imposition of compulsory voting. The anonymous writer in the Harvard Law Review offers what is surely meant to be an attention-grabbing and paradoxical-sounding conclusion: “Although there are several legal obstacles to compulsory voting, none of them appear to be substantial enough to bar compulsory voting laws. … The biggest obstacle to compulsory voting is the political reality that compulsory voting seems incompatible with many Americans’ notions of individual liberty. As with many other civic duties, however, voting is too important to be left to personal choice.”

How might one respond to these arguments? Perhaps the most obvious answer is that if one looks at the countries that have compulsory voting–say, Brazil, Australia, Peru, Thailand–it’s not obvious that their politics are characterized by greater appeals to the nonpartisan middle, or that the bond between the population and its elected representatives is especially strong.

For a more detailed deconstruction, I recommend a 2009 essay by Annabelle Lever in the journal Public Reason, “Is Compulsory Voting Justified?” Basically, her argument comes down to a belief that the potential gains from compulsory voting are unproven and unsupported by evidence in countries that have tried it, while the lost freedom from compulsory voting would be definite and real.

In Lever’s view, the evidence that exists doesn’t show that political parties start competing for the middle in a different way, nor that outcomes are different. For example, northern European social democratic countries like Sweden don’t have compulsory voting, and do have declining voter turnout.
If people are uninterested or disillusioned and don’t want to vote for the existing candidates, it’s not clear that threatening them with a criminal offense for not voting will build connections from the population to elected representatives. If political parties don’t need to focus on turnout, they will immediately turn to other ways of identifying swing groups and wedge issues. The penalties for not voting may not look large in some broad sense, but let’s be clear: when we enter the realm of compulsory voting, we are talking about criminal penalties. Society will need to decide how large the fines or other penalties will be, and what happens to those (and there will be some!) who refuse to pay. If not voting is a crime, we will be making a lot of people into criminals–maybe guilty of only a minor crime, but still recorded in our information-technology society as breaking the law. It is by no means clear that having a right to vote should be reinterpreted as having a legal duty to vote: there are many rights that one may choose to exercise, or not, as one prefers. In a free society, the right to be left alone has some value, too. Lever concludes:

“I have argued that the case for compulsory voting is unproven. It is unproven because the claim that compulsion will have beneficial results rests on speculation about the way that nonvoters will vote if they are forced to vote, and there is considerable, and justified, controversy on this matter. Nor is it clear that compulsory voting is well-suited to combating those forms of low and unequal turnout that are, genuinely, troubling. On the contrary, it may make them worse by distracting politicians and voters from the task of combating persistent, damaging, and pervasive forms of unfreedom and inequality in our societies.

“Moreover, I have argued, the idea that compulsory voting violates no significant rights or liberties is mistaken and is at odds with democratic ideas about the proper distribution of power and responsibility in a society. It is also at odds with concern for the politically inexperienced and alienated, which itself motivates the case for compulsion. Rights to abstain, to withhold assent, to refrain from making a statement, or from participating, may not be very glamorous, but can be nonetheless important for that. They are necessary to protect people from paternalist and authoritarian government, and from efforts to enlist them in the service of ideals that they do not share. Rights of non-participation, no less than rights of anonymous participation, enable the weak, timid and unpopular to protest in ways that feel safe and that are consistent with their sense of duty, as well as self-interest. … People must, therefore, have rights to limit their participation in politics and, at the limit, to abstain, not simply because such rights can be crucial to prevent coercion by neighbours, family, employers or the state, but because they are necessary for people to decide what they are entitled to do, what they have a duty to do, and how best to act on their respective duties and rights.”

I don’t know of any recent polls on how Americans feel about compulsory voting, but a 2004 poll by ABC News found 72% opposed–a slightly higher percentage than a poll taken 40 years earlier on the same subject. These kinds of results from nationally representative polls pose an additional level of irony. If Americans as a group are strongly opposed to laws that would require compulsory voting, it seems problematic to glide around this opposition into an argument that, really, although they don’t know it yet, they would feel better off with compulsory voting.

In a 2004 essay on compulsory voting (in this volume), Maria Gratschew points out that a number of countries in western Europe that used to have compulsory voting have moved away from it in recent decades: Austria, Italy, Greece, and the Netherlands. In discussing the decision by the Netherlands to drop its compulsory voting laws in 1967, Gratschew writes: “A number of theoretical as well as practical arguments were put forward by the committee: for example, the right to vote is each citizen’s individual right which he or she should be free to exercise or not; it is difficult to enforce sanctions against non-voters effectively; and party politics might be livelier if the parties had to attract the voters’ attention, so that voter turnout would therefore reflect actual participation and interest in politics.”

Compulsory voting is one of those intriguing roads that looks better when not actually traveled.

Stanley Fischer: How Far on Financial Reform?

In the aftermath of the Great Recession, it seemed blindingly clear that the financial sector needed reform. But what kind of reform was needed, and what progress has been made? Stanley Fischer takes on these questions in the 2014 Martin Feldstein Lecture, “Financial Sector Reform: How Far Are We?” published in the NBER Reporter (2014, vol. 3).

Everyone in academic economics knows who Fischer is, but for those outside, he’s a man with an extraordinary resume. He is currently vice chairman of the Federal Reserve. Before that, he was governor of the Bank of Israel for eight years. Before that, at various times, he was chief economist at the World Bank, first deputy managing director of the International Monetary Fund, vice chairman at Citibank, and a very prominent economics professor at MIT. Here is Fischer’s quick summary of the nine main items on the financial sector reform agenda:

Several financial sector reform programs were prepared within a few months after the Lehman Brothers failure. These programs were supported by national policymakers, including the community of bank supervisors. The programs – national and international – covered some or all of the following nine areas: (1) to strengthen the stability and robustness of financial firms, “with particular emphasis on standards for governance, risk management, capital and liquidity”; (2) to strengthen the quality and effectiveness of prudential regulation and supervision; (3) to build the capacity for undertaking effective macroprudential regulation and supervision; (4) to develop suitable resolution regimes for financial institutions; (5) to strengthen the infrastructure of financial markets, including markets for derivative transactions; (6) to improve compensation practices in financial institutions; (7) to strengthen international coordination of regulation and supervision, particularly with regard to the regulation and resolution of global systemically important financial institutions, later known as G-SIFIs; (8) to find appropriate ways of dealing with the shadow banking system; and (9) to improve the performance of credit rating agencies, which were deeply involved in the collapse of markets for collateralized and securitized lending instruments, especially those based on mortgage finance.

In his talk, he focuses on three of these items: “Rather than seek to give a scorecard on progress on all the aspects of the reform programs suggested from 2007 to 2009, I want to focus on three topics of particular salience mentioned earlier: capital and liquidity, macroprudential supervision, and too big to fail.”

Capital ratios have been substantially increased, which means that banks will in the future have a bigger buffer the next time their loan portfolio turns unexpectedly bad. Here’s Fischer:

The bottom line to date: The capital ratios of the 25 largest banks in the United States have risen by as much as 50 percent from the beginning of 2005 to the start of this year, depending on which regulatory ratio you look at. For example, the tier 1 common equity ratio has gone up from 7 percent to 11 percent for these institutions. The increase in the ratios understates the increase in capital because it does not adjust for tougher risk weights in the denominator. In addition, the buffers of HQLAs [high-quality liquid assets] held by the largest banking firms have more than doubled since the end of 2007, and their reliance on short-term wholesale funds has fallen considerably. At the same time, the introduction of macroeconomic supervisory stress tests in the United States has added a forward-looking approach to assessing capital adequacy, as firms are required to hold a capital buffer sufficient to withstand a several-year period of severe economic and financial stress. The stress tests are a very important addition to the toolkit of supervisors, one that is likely to add significantly to the quality of financial sector supervision.

The proposed changes in macroprudential regulation (a concept previously introduced and discussed here and here on this blog) are at best incomplete so far.

As is well known, the United Kingdom has reformed financial sector regulation and supervision by setting up a Financial Policy Committee (FPC), located in the Bank of England; the major reforms in the United States were introduced through the Dodd-Frank Act, which set up a coordinating committee among the major regulators, the Financial Stability Oversight Council (FSOC). Don Kohn … sets out the following requirements for successful macroprudential supervision: to be able to identify risks to financial stability, to be willing and able to act on these risks in a timely fashion, to be able to interact productively with the microprudential and monetary policy authorities, and to weigh the costs and benefits of proposed actions appropriately. Kohn’s cautiously stated bottom line is that the FPC is well structured to meet these requirements, and that the FSOC is not. In particular, the FPC has the legal power to impose policy changes on regulators, and the FSOC does not, for it is mostly a coordinating body.

What about the efforts to address the “too big to fail” problem, where large financial firms that have taken big risks and earlier earned large profits then need to be bailed out by the government, because they are so large and so interconnected to the rest of the economy that their failure could lead to even larger public costs?

One can regard the entire regulatory reform program, which aims to strengthen the resilience of banks and the banking system to shocks, as dealing with the TBTF [too big to fail] problem by reducing the probability that any bank will get into trouble. There are, however, some aspects of the financial reform program that deal specifically with large banks. The most important such measure is the work on resolution mechanisms for SIFIs, including the very difficult case of G-SIFIs [global systemically important financial institutions]. In the United States, the Dodd-Frank Act has provided the FDIC with the Orderly Liquidation Authority (OLA) – a regime to conduct an orderly resolution of a financial firm if the bankruptcy of the firm would threaten financial stability. …

Work on the use of the resolution mechanisms set out in the Dodd-Frank Act, based on the principle of a single point of entry, holds out the promise of making it possible to resolve banks in difficulty at no direct cost to the taxpayer — and in any event at a lower cost than was hitherto possible. However, work in this area is less advanced than the work on raising capital and liquidity ratios. … [P]rogress in agreeing on the resolution of G-SIFIs and some other aspects of international coordination has been slow. … 

What about simply breaking up the largest financial institutions? Well, there is no “simply” in this area. … Would a financial system that consisted of a large number of medium-sized and small firms be more stable and more efficient than one with a smaller number of very large firms? … That is not clear, for Lehman Brothers, although a large financial institution, was not one of the giants — except that it was connected with a very large number of other banks and financial institutions. Similarly, the savings and loan crisis of the 1980s and 1990s was not a TBTF crisis but rather a failure involving many small firms that were behaving unwisely, and in some cases illegally. This case is consistent with the phrase, “too many to fail.” Financial panics can be caused by herding and by contagion, as well as by big banks getting into trouble. In short, actively breaking up the largest banks would be a very complex task, with uncertain payoff.

Fischer’s overall tone seems to me cautiously positive about how financial reform has proceeded. My own sense is more negative. By Fischer’s accounting, capital ratios have clearly improved, but macroprudential regulation and avoiding too big to fail remain works in progress.

As I have pointed out before on this website, the Dodd-Frank Act of 2010 required the passage of 398 rules, and a little more than half of those rules have been finalized in the last few years. But to be clear, a “finalized” rule doesn’t actually mean that businesses have figured out how to comply with the rule, and the rule itself may still be under negotiation. In certain areas from Fischer’s list of reforms, like #8 concerning the risks of shadow banking (for discussion, see here and here) and #9 concerning credit rating agencies (for discussion, see here and here), Dodd-Frank required almost no changes at all. I understand that financial reform is like changing the course of an enormous ocean liner, not like steering a bicycle. But we are now six years past the worst of the crisis in 2008, and much remains to be done.

Oikonomia, Revisited

My knowledge of ancient Greek is not significantly different from zero, but every student of economics at some point runs into oikonomia, the root word from which economy and economics were later derived. For example, in describing the etymology of “economy,” the Oxford English Dictionary writes that it derives from “ancient Greek οἰκονομικός practised in the management of a household or family, thrifty, frugal, economical.”

To the modern ear, the idea of economics as having roots in “household management” makes some intuitive sense: after all, a number of modern economic models are built on the idea of a household that seeks to maximize its utility, subject to constraints of income and/or time. However, Dotan Leshem suggests that this easy parallel between what the Greeks meant by household management and modern microeconomics is rather misleading. He provides some additional context and insight for the term in “Oikonomia Redefined,” which appeared in the Journal of the History of Economic Thought, March 2013 (35:1, pp. 43-61). The journal is not freely available online, but many readers will have access through a library subscription. Leshem describes a work written by Xenophon, roughly dated to about 360 BC:

The first to propose a definition of oikonomia—the management and dispensation of a household—was Xenophon. He did so in the concluding chapter of the theoretical dialogue of the Oikonomikos (the Oikonomikos is composed of two dialogues: the first is theoretical, while the second focuses on the art of oikonomia). … Xenophon’s definition is composed of four building blocks, or sub-definitions: i) oikonomia as a branch of theoretical knowledge; ii) the oikos as the totality of one’s property; iii) property as that which is useful for life; and iv) oikonomia as the knowledge by which men increase that which is useful for life. Clarifying the meanings of the sub-definitions by a close reading of ancient Greek texts will allow me to argue that the ancient Greek philosophers understood oikonomia as encompassing any activity in which man, when faced with nature’s abundance or excess, acquires a prudent disposition that is translated into practical and theoretical knowledge, in order to comply with his needs and generate surplus. The surplus generated allows man to practice extra economic activities such as politics and philosophy. Excess in the definition proposed is an attribute of nature, which is assumed to be able to meet everyone’s needs and beyond, if economized prudently. Surplus, on the other hand, is the product of people’s prudent economizing of nature’s excess that is not used for securing existence.

To get a grip on this earlier notion, it's useful to remember that the idea of a "household" was an expansive one in ancient Greece: indeed, it included all the property of the well-to-do, in a way that also encompassed what we would now think of as production and firms. This broad idea of the household was called the oikos, which was in turn viewed as a part of the polis, or public sphere.

The fundamental economic challenge, according to the ancient Greeks, was to strike a balance. On one side, too little emphasis on economics meant an inability to provide the surplus that would enable some people to live the good life of politics and philosophy. On the other side, too great an emphasis on economics could lead to a pursuit of luxurious living, an outcome which also would interfere with pursuit of the good life. Here's how Leshem describes Xenophon on this subject:

In the same vein, Xenophon, who explored the nature of wealth in the first two chapters of the Oikonomikos, is also preoccupied with setting the right limits to engagement in economics without directing surplus either into luxury or back into the economy. He does so by presenting two obstacles to man’s accumulation of wealth. Both are the outcome of self-enslavement to excessive desires instead of need satisfaction. The first takes place when someone is immersed in non-economic activities that prevent him from ‘‘engaging in useful occupations,’’ meaning that he is wholly taken up with activities that prevent him from economizing his life. Such total avoidance of economizing, in its meaning of utilizing usable things, is presented as a sort of bondage. Put differently, evading a prudent disposition causes the loss of the conditions enabling a good and happy life. The second obstacle to wealth accumulation arises when one immerses in the economic sphere, having enslaved oneself to desires:

[Xenophon writes:] "And these too, are slaves, and they are ruled by extremely harsh masters. Some are ruled by gluttony, some by fornication, some by drunkenness, and some by foolish and expensive ambitions which rule cruelly over any men they get into their power, as long as they see that they are in their prime and able to work . . . mistresses such as I have described perpetually attack the bodies and souls and households all the time that they dominate them."

This second kind of self-enslavement is not to be found in avoidance of economic activities and the lack of prudent disposition when using things. Instead, it is to be found in the failure to set boundaries to the economic sphere and, as a consequence, to fully immerse oneself in it. Such full immersion is presented as a lack of ability to generate extra-economic surplus. It is here where the other side of oikonomia’s definition as prudent conduct makes its appearance: besides utilizing the thing acquired for the sake of existence, its prudent use generates extra-economic surplus.

The modern notion of economics is rooted in the idea that we all face scarcity of time, money, and energy, and thus need to make decisions involving tradeoffs. As Leshem points out, the ancient Greek notion is instead rooted in the idea that we face abundance: the challenge is how to bring that abundance to fruition, and how to keep ourselves from giving in to luxurious living. Because the purpose of economic life is to create this extra-economic surplus, the goal can be achieved either by increasing production or by keeping the level of consumption low enough that a comfortable surplus persists. Leshem writes:

"As can be seen, the problem arising when supplying the needs of the oikos is not how to deal with scarce means. It is, rather, how to set a limit to engaging in economic matters altogether, since nature possesses excessive means that can supply all of people’s natural needs, as well as their unnatural desires. On the other hand, if economized prudently, this excess can be used to generate surplus. It can supply the needs of all the inhabitants of one’s oikos or polis, and free some of its members from engaging in economic matters to experience the good life, which is extra-economic."

Pulling these various elements together, Leshem sums up Xenophon's definition in this way (Greek words and page numbers omitted):

Xenophon’s definition of the oikonomia, as ‘‘a branch of theoretical knowledge . . . by which men can increase household . . . which is useful for life . . .’’  can be reformulated into: oikonomia is the prudent management of the excess found in man and nature in order to allow the practice of a happy life with friends, in politics, and in philosophy. We can see that the two definitions are interchangeable; oikonomia is the management of the oikos. The oikos itself equals wealth, in turn to be defined as everything useful for life. …  The definition of wealth is compatible with the definition of the oikonomia as the prudent management of needs satisfaction in order to generate surplus leisure time (which was perceived by Aristotle as a precondition for the attainment of happiness).

Up to this point, the notion of economics and the "good life" of politics or philosophy seems like a conception limited to the upper sliver of property owners. Indeed, there is an interpretation of Greek economic philosophy, often associated with Aristotle and still echoing today, which holds that economic life is by definition without virtue, and that true human virtue comes only from noneconomic areas of life like public life and philosophy. Taken to an extreme, this view would hold that those who put too much time and energy into work cannot be virtuous. An alternative modern view, often associated with John Locke, holds that economic work is a transformative interaction with the natural world through which humans create their own autonomy and virtue. (Here is an essay of my own thoughts on interactions between economics and virtue.)

Leshem argues that while scholars of ancient Greek economic thought used to see a disjunction between economic life and a virtuous life, with the virtues of politics and philosophy largely limited to those in high positions, the more recent literature has taken a broader view. He writes:

"In addition, contemporary literature persuasively presents the oikos as a diversified domain in which there exist all kinds of human relations besides despotic ones. They stress the friendship between husband and wife, it being for the sake of happiness and not just as a means to support the polis, the role of education of children within the household, the different kinds of slaves, the use of other means of communication beside violence, and the household’s existence in and for itself. In this depiction, not only the master, but many participants in the household, can demonstrate virtue, doing so within its bounds."

Overall, the concept of economic life among the ancient Greeks was not built on individuals and firms interacting in markets for goods, labor, and capital, nor on the fundamental issue of facing scarcity and making decisions that involve tradeoffs. Instead, the foundation was the household or oikos, serving both as a building block for society as a whole and as the means of enabling people to live a virtuous life outside the economic realm. Here is how Leshem sums up the difference between the economic theory of the ancient Greeks and modern economic theory:

[T]he differences and the resemblance between contemporary and ancient Greek economic theory are rather marked. Both define the economic sphere by the disposition people demonstrate (in the former—of prudence; in the latter—of rationality), which is translated into the theoretical and practical knowledge people demonstrate in economic activity. Moreover, both say that everything that people utilize in order to satisfy their needs/desires and to generate surplus is part of the economic domain, and not (just) material wealth. But, while in ancient economic theory, acquiring this disposition was seen as the expression of an ethical choice, in contemporary theory, the individual’s doing so is taken as a given, so that people’s rational disposition can be inferred from their revealed preferences. 

The two economic theories embody distinct ontologies: while ancient economic theory held that humans face abundance and excessive means in the economic domain, contemporary economists hold that it is only scarce means that are available there. As for the designation of the surplus generated, in the ancient theory it is a surplus of leisure time that allows the master/citizen to participate in politics and engage in philosophy, while in the contemporary theory … it is to be turned back to the economic domain, as a source for growth, or, as pointed out by critics of contemporary consumerism, into luxurious consumption.

The Economics of Water in the American West

Fresh water doesn\’t get used up in a global sense: that is, the quantity of fresh water on planet Earth doesn\’t change. But the way in which the world\’s fresh water is naturally distributed–by evaporation, precipitation, groundwater, lakes, and rivers and streams–doesn\’t always match where people want that water to be. The man-made systems of water distribution like dams, reservoirs, pipelines, and irrigation systems can alter the natural distribution of water to some extent. But the American West is experiencing a combination of drought that reduces the natural supply of water and rising population that wants more water. Even with drought, population pressures, and environmental demands for fresh water, there is actually plenty of water in the American Southwest–at least, if the incentives are put in place for some changes to be implement by urban households, farmers, water providers, and legislators.

For an analysis of the issues and options, a useful starting point is a set of three papers published by the Hamilton Project at the Brookings Institution, which I draw on below.

Here\’s a figure from Culp, Glennon, and Libecap showing the U.S. drought situation, concentrated in the southwestern United States: 
These southwestern states have also experienced dramatic population growth in recent decade. Of the regions of the United States, these are the states with the highest population growth and the lowest annual rainfall even in average times. Here\’s a figure from Kearney, Harris, Hershbein, Jácome, and Nantz:
Here\’s my master plan for how to address the water shortfall, drawing on discussions in the various papers. 
1) Reduce the incentives for outdoor watering by urban households in dry states. 
If you had to guess, would you think that urban households in the dry states of the American Southwest use more or less water than households in other states? In general, they tend to be heavier users of water. Here's a figure from Kearney et al., who report (citations omitted):

Outdoor watering is the main factor driving the higher use of domestic water per capita in drier states in the West. Whereas residents in wetter states in the East can often rely on rainwater for their landscaping, the inhabitants of Western states must rely on sprinklers. As an example, Utah’s high rate of domestic water use per capita is driven by the fact that its lawns and gardens require more watering due to the state’s dry climate. Similarly, half of California’s residential water is used solely for outdoor purposes; coastal regions in that state use less water per capita than inland regions, largely because of less landscape watering . . .

There are a variety of ways to reduce outdoor use of water: specific rules like banning outside watering, or limiting it to certain times of day (to reduce evaporation); the use of drip irrigation and other water-saving technologies; and so on. For economists, an obvious complement to these sorts of steps is to charge for water so that the first "block" of water used has a relatively low price, but additional "blocks" have higher and higher prices.
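This kind of increasing block pricing is easy to sketch in code. The block sizes, rates, and usage levels below are hypothetical, invented purely for illustration rather than taken from any actual utility's tariff:

```python
# Hypothetical increasing block tariff: the first 5,000 gallons are cheap,
# the next 10,000 cost more, and everything beyond that costs more still.
# All rates and block sizes here are invented for illustration.

def water_bill(gallons):
    """Monthly bill under a three-tier increasing block rate (per 1,000 gal)."""
    blocks = [(5_000, 2.00), (10_000, 4.00), (float("inf"), 8.00)]
    bill, remaining = 0.0, gallons
    for size, price in blocks:
        used = min(remaining, size)
        bill += used / 1_000 * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

# A low-use household pays the bottom rate on every gallon; a heavy
# outdoor-watering household pays a much higher average price per gallon.
print(water_bill(4_000))    # 8.0   (all usage in the cheap block)
print(water_bill(30_000))   # 170.0 (half the gallons in the top block)
```

Under this hypothetical schedule, the heavy user pays almost three times the average price per gallon of the light user, which is exactly the incentive to rethink the green lawn and the washed-down driveway.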
Here\’s a figure showing average monthly water bills across cities. Los Angeles and San Diego do rank among the cities with higher bills, although the absolute difference is not enormous. But again, the point here is not the average bill, but rather that those who use large amounts of water because they want a green lawn and a washed-down driveway should face some incentives to alter that behavior. 
2) Upgrade the water delivery infrastructure. 

One hears a lot of talk about the case for additional infrastructure spending, but much of the focus seems to be on fixing roads and bridges. I'd like to hear some additional emphasis on how to fix up the water infrastructure system. As Ajami, Thompson, and Victor note:

"Water infrastructure, by some measures the oldest and most fragile part of the country’s built environment, has decayed. … Water infrastructure—including dams, reservoirs, aqueducts, and urban distribution pipes—is aging: almost 40 percent of the pipes used in the nation’s water distribution systems are forty years old or older, and some key infrastructure is a century old. On average, about 16 percent of the nation’s piped water is lost due to leaks and system inefficiencies, wasting about 7 billion gallons of clean and treated water every day … Metering inaccuracies and unauthorized consumption also lead to revenue loss. Overall, about 30 percent of the water in the United States falls under the category of nonrevenue water, meaning water that has been extracted, treated, and distributed, but that has never generated any revenue because it has been lost to leaks, metering inaccuracies, or the like …"

3) Let farmers sell some of their water to urban areas. 
For historical reasons, a very large proportion of the water in many western states, and especially in California, goes to agricultural uses. Some of these uses combine relatively high market value with relatively low use of water, like many fruits (including wine grapes), vegetables, and nuts. But the use of water for other crops is more troublesome. Culp, Glennon, and Libecap go into these issues in some detail. As one vivid example, they write: "In 2013, Southern California farmers used more than 100 billion gallons of Colorado River water to grow alfalfa (a very water-intensive crop) that was shipped abroad to support rapidly growing dairy industries, even as the rest of the state struggled through the worst drought in recorded history …"
There are a substantial number of legal barriers to the idea of farmers trading some water to urban areas, but the possibilities are quite striking. Here's a figure showing that 80% of California's water use goes to agriculture, with a substantial share of that going to lower-value field crops like alfalfa, rice, and cotton. In agricultural areas, as in urban ones, there is often considerable scope for conserving water in various ways, like targeting the use of irrigation more carefully, making sure that irrigation ditches don't leak while carrying water, and the like.
Imagine for the sake of argument that a comprehensive effort, combining shifts to different crops with water conservation, made it possible to reduce agricultural water use in California by one-eighth: that is, instead of using 80% of the available water, agriculture would get by with 70%. The amount available for urban and/or environmental uses would then rise by half, from 20% to 30% of the available water.
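As a quick check on that arithmetic (the 80%/20% split is the figure cited above; everything else follows mechanically):

```python
# Back-of-the-envelope check: cutting agriculture's 80% share by one-eighth
# frees 10 percentage points of total water use, raising the share left for
# urban/environmental uses from 20% to 30%, a 50% increase for those uses.
ag_share = 0.80
freed = ag_share / 8                  # one-eighth of agricultural use
urban_before = 1 - ag_share           # 0.20
urban_after = urban_before + freed    # 0.30
pct_increase = freed / urban_before   # 0.50
print(round(urban_after, 2), round(pct_increase, 2))
```

A modest proportional cut on the big user translates into a large proportional gain for the small users, which is why agricultural-to-urban trades are so attractive.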
One approach Culp, Glennon, and Libecap describe, implemented starting in 2002 in Santa Fe, New Mexico, required that any new urban construction find a way to offset the water it would use.

As an example, developers could obtain a permit to build if they retrofitted existing homes with low-flow toilets. Residents of these homes welcomed the chance to get free toilets, and Santa Fe plumbers jumped at the opportunity for new business. Within a couple of years plumbers had swapped out most of the city’s old toilets with new high-efficiency ones. Water that residents would have flushed away now supplies new homes.  … In short order, a market emerged as developers began to buy water rights from farmers. Developers deposited the water rights in a city-operated water bank; when the development became shovel-ready, the developer withdrew the water rights for the project. If the project stalled, the developer could sell the rights to another developer whose project was farther along. Santa Fe also enacted an aggressive water conservation program and adopted water rates that rise on a per unit basis as households consume additional blocks of water. Thanks to the innovative water-marketing measures, the conservation program, and tiered water rates, water use per person in Santa Fe has dropped 42 percent since 1995 … 

4) Set up groundwater banks. 
Historically, most western states have allowed any property owner to drill a well and use groundwater without limit. But the groundwater reserves are slow to recharge, and with the drought and population pressures, they are under severe stress. Culp, Glennon, and Libecap explain (citations omitted):

Groundwater has been the saving grace for many parts of the water-starved West. Following the advent of high-lift turbine pump technology in the 1930s, many regions had access to vast reserves of water in underground aquifers that they have tapped to supply water when surface water supplies were inadequate. A recent study looked at data on freshwater reserves above ground and below ground across the Southwest from 2004 to 2013. It found that freshwater reserves had declined by 53 million acre-feet during this time—a volume equivalent to nearly twice the capacity of Lake Mead! The study also found that 75 percent of the decline came from groundwater sources, rather than from the better-publicized declines in surface reservoirs, such as Lake Mead and Lake Powell. Much of this decline occurred because some Western states, including California, have historically failed to regulate, or do not adequately regulate, groundwater withdrawals. As a result, groundwater aquifers are effectively being mined to provide water for day-to-day use. In response to the ongoing drought, California farmers continue to drill new wells at an alarming rate, lowering water tables to unprecedented depths … In the San Joaquin Valley of California, excessive groundwater pumping caused the water table to plummet and the surface of the earth to subside more than twenty-five feet between 1925 and 1977 …

Arizona has already been taking steps toward groundwater protection, both by limiting what can be taken out and by providing incentives to save water in the form of recharging groundwater (which avoids the problem of evaporation). 

Although not yet developed into a formal exchange, Arizona has been at the cutting edge in developing groundwater recharge and recovery projects and a supporting statutory framework to help enhance the reliability of water supplies. Arizona allows municipal users, industrial users, and various private parties to store water in exchange for credits that they can transfer to other users. Because water stored underground in aquifers is not subject to evaporation, groundwater that is deliberately created through recharge activity can be stored and recovered later. This recharge and recovery approach is facilitated by Arizona laws that restrict the use of groundwater in several of the state’s most important groundwater basins; these restrictions prevent open access to the resource. Restrictions on open access, combined with statutory and regulatory provisions that allow for the creation and recovery of credits, created the essential conditions for trade in stored groundwater. As a result, numerous transactions have occurred between various municipal interests, water providers, and private parties. 

California passed legislation last month to regulate groundwater pumping for the first time. 
5) More research and development on water-saving technologies. 
As Ajami, Thompson, and Victor discuss at some length, there is relatively little innovative activity in water conservation and purification, as opposed to, say, energy conservation and new sources of energy. They argue that part of the reason is that energy-providing companies compete against each other, while most water companies are sleepy publicly run local monopolies. Potential entrepreneurs can explore many ways of producing and using energy, confident that if they come up with something useful, their invention will find a ready market. But entrepreneurs looking at methods of water conservation will often find that their ideas apply only locally, are hard to patent, or are hard to sell to water companies and users. Here's their figure comparing spending for energy and water innovation at a global and U.S. level.
Drought is a natural problem. But the factors that determine how the available water gets used represent an economic problem of the incentives and constraints that determine the allocation of a scarce resource. In the American West, the institutional problems of water allocation seem to me even more severe than the natural problem of drought.
* Full disclosure: I did comments and editing on the paper by Culp, Glennon, and Libecap, and was paid an honorarium for doing so. 

Political Polarization and Confirmation Bias

Election Day is coming up a week from tomorrow, on Tuesday. Are you voting based on your assessment of the beliefs, character, and skills of the candidates? Or are you voting the party line, one more time? Here's an article I wrote for the Star Tribune newspaper in my home state of Minnesota. I have added weblinks below to several of the studies mentioned.

\”It\’s my belief and I\’m sticking to it:

In such a polarized atmosphere, you may want to examine pre-existing biases.\”
By Timothy Taylor

Part of the reason American voters have become more polarized in recent decades is that both sides feel better-informed.

The share of Democrats who had “unfavorable” attitudes about the Republican Party rose from 57 percent in 1994 to 79 percent in 2014, according to a Pew Research Center survey in June called “Political Polarization in the American Public.”

Similarly, the percentage of Republicans who had unfavorable feelings about the Democratic Party climbed from 68 percent to 82 percent.

Most of this increase is due to those who have “very unfavorable” views of the other party. Among Democrats, 16 percent had “very unfavorable” opinions of the Republican Party in 1994, rising to 38 percent by 2014. Among Republicans, 17 percent had a “very unfavorable” view of the Democratic Party in 1994, rising to 43 percent by 2014.

A follow-up poll by Pew in October found that those with more polarized beliefs are more likely to vote. The effort to stir the passions of the ideologically polarized base so that those people turn out to vote explains the tone of many political advertisements.

A common response to this increasing polarization is to call for providing more unbiased facts. But in a phenomenon that psychologists and economists call “confirmation bias,” people tend to interpret additional information as additional support for their pre-existing ideas.

One classic study of confirmation bias was published in the Journal of Personality and Social Psychology in 1979 by three Stanford psychologists, Charles G. Lord, Lee Ross and Mark R. Lepper. In that experiment, 151 college undergraduates were surveyed about their beliefs on capital punishment. Everyone was then exposed to two studies, one favoring and one opposing the death penalty. They were also provided details of how these studies were done, along with critiques and rebuttals for each study.

The result of receiving balanced pro-and-con information was not greater intellectual humility — that is, a deeper perception that your own preferred position might have some weaknesses and the other side might have some strengths. Instead, the result was a greater polarization of beliefs. Student subjects on both sides — who had received the same packet of balanced information! — all tended to believe that the information confirmed their previous position.

A number of studies have documented the reality of confirmation bias since then. In an especially clever 2013 study, Dan M. Kahan (Yale University), Ellen Peters (Ohio State), Erica Cantrell Dawson (Cornell) and Paul Slovic (Oregon) showed that people’s ability to interpret numbers declines when a political context is added.

Their study included 1,100 adults of varying political beliefs, split into four groups. The first two groups received a small table of data about a hypothetical skin cream and whether it worked to reduce rashes. Some got data suggesting that the cream worked; others got data suggesting it didn’t. But people of all political persuasions had little trouble interpreting the data correctly.

The other two groups got tables of data with exactly the same numbers. But instead of indicating whether the skin cream worked, the labels on the table now said the data showed the number of cities that had enacted a ban on handguns, or had not, and whether crime rates had subsequently fallen, or not.

Some got data suggesting that the handgun ban had reduced crime; others got data suggesting it didn’t. The data tables were identical to the skin cream example. But people in these groups became unable to describe what the tables found. Instead, political liberals and conservatives both tended to find that the data supported their pre-existing beliefs about guns and crime — even when it clearly didn’t.
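The trap in these tables is that reading them correctly requires comparing rates across rows rather than raw counts. A small sketch makes the point; the counts below are illustrative, chosen to mimic the structure of the study's tables rather than copied from them:

```python
# Illustrative 2x2 table in the spirit of the Kahan et al. design; the
# counts are hypothetical. The correct reading compares the *rate* of
# good outcomes in each row, not the raw counts.

def better_condition(a_good, a_bad, b_good, b_bad):
    """Return which row, 'A' or 'B', has the higher rate of good outcomes."""
    rate_a = a_good / (a_good + a_bad)
    rate_b = b_good / (b_good + b_bad)
    return "A" if rate_a > rate_b else "B"

# Row A: 223 good outcomes, 75 bad. Row B: 107 good, 21 bad.
# Raw counts favor A (223 > 107), but B's rate is higher (about 84% vs.
# 75%). That rate comparison is exactly the step subjects stopped making
# once the table was relabeled as being about handgun bans and crime.
print(better_condition(223, 75, 107, 21))   # B
```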

In short, many Americans wear information about public policy like medieval armor, using it to ward off challenges.

Of course, it’s always easy to define others as hyperpartisans who won’t even acknowledge basic facts. But what about you? One obvious test is how much your beliefs change depending on the party of a president.

For example, have your opinions on the economic dangers of large budget deficits varied, coincidentally, with whether the deficits in question occurred under President Bush (or Reagan) or under President Obama?

Is your level of outrage about presidents who push the edge of their constitutional powers aimed mostly at presidents of “the other” party? What about your level of discontent over government surveillance of phones and e-mails? Do your feelings about military actions in the Middle East vary by the party of the commander in chief?

Do you blame the current gridlock in Congress almost entirely on the Republican-controlled House of Representatives or almost entirely on the Democratic-controlled Senate? Did you oppose ending the Senate filibuster back in 2006, when Democrats could use it to slow down the Republicans, but then favor ending the filibuster in 2014, when Republicans could use it to slow down Democrats? Or vice versa?

Do big-money political contributions and rich candidates seem unfair when they are on the other side of the political spectrum, but part of a robust political process and a key element of free speech when they support your preferred side?

Do you complain about gridlock and lack of bipartisanship, but then — in the secrecy of the ballot box — do you almost always vote a straight party ticket?

Of course, for all of these issues and many others, there are important distinctions that can be drawn between similar policies at different times and places. But if your personal political compass somehow always rotates to point to how your pre-existing beliefs are already correct, then you might want to remember how confirmation bias tends to shade everyone’s thinking.

When it comes to political beliefs, most people live in a cocoon of semi-manufactured outrage and self-congratulatory confirmation bias. The Pew surveys offer evidence on the political segregation in neighborhoods, places of worship, sources of news — and even in who we marry.

Being opposed to political polarization doesn’t mean backing off from your beliefs. But it does mean holding those beliefs with a dose of humility. If you can’t acknowledge that there is a sensible challenge to a large number (or most?) of your political views, even though you ultimately do not agree with that challenge, you are ill-informed.

A wise economist I know named Victor Fuchs once wrote: “Politically I am a Radical Moderate. ‘Moderate’ because I believe in the need for balance, both in the goals that we set and in the institutions that we nourish in order to pursue those goals. ‘Radical’ because I believe that this position should be expressed as vigorously and as forcefully as extremists on the Right and Left push theirs.”

But most moderates are not radical. Instead, they are often turned off and tuned out from an increasingly polarized political arena.

Timothy Taylor is managing editor of the Journal of Economic Perspectives, based at Macalester College in St. Paul.

Keeping Up with the Joneses on Energy Conservation

The phrase \”Keeping Up with the Joneses\” seems to have become firmly established in U.S. culture as a result of a prominent comic strip by that name which started in 1913 and ran for several decades.
Characters in the cartoon often referred to what the Joneses were doing but you never met them. Usually the term refers to a desire to imitate the higher and more conspicuous consumption levels of one\’s neighbors. But can a desire to follow the neighbors also be harnessed to energy conservation efforts?

Hunt Allcott and Todd Rogers offer evidence on that question in "The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation," in the October 2014 issue of the American Economic Review (104:10, pp. 3003-3037). The AER is not freely available online, but many readers will have access through library subscriptions. (Full disclosure: The AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) They write:

We study a widely-implemented and highly-publicized behavioral intervention, the “home energy report” produced by a company called Opower. The Opower reports feature personalized energy use feedback, social comparisons, and energy conservation information, and they are mailed to households every month or every few months for an indefinite period. Utilities hire Opower to send the reports primarily because the resulting energy savings help to comply with state energy conservation requirements. There are now 6.2 million households receiving home energy reports at 85 utilities across the United States.

At some of the utilities, the Opower notices have been implemented as a randomized control trial, which makes it relatively straightforward to compare the behavior of those who receive the notices and those who don’t. One can also use these data to look at questions like whether the notices have a short-term effect that fades unless the reminders continue, or a longer-term effect that continues for a time and then fades as people get tired of receiving the notices. Here’s an example of the front and back of a typical Opower notice:

What effect do these notices have? Allcott and Rogers report that when first receiving a notice, a number of consumers show a quick but short-term reduction in energy use. As people receive more notices, this cycle of reducing consumption and then bouncing back gets smaller. But the repetition of the message seems to have a longer-term effect after two years; that is, people’s habits have changed in a way that lasts for several more years. In their words:

At first, there is a pattern of “action and backsliding”: consumers reduce electricity use markedly within days of receiving each of their initial reports, but these immediate efforts decay at a rate that might cause the effects to disappear after a few months if the treatment were not repeated. Over time, however, the cyclical pattern of action and backsliding attenuates. After the first four reports, the immediate consumption decreases after report arrivals are about five times smaller than they were initially. For the groups whose reports are discontinued after about two years, the effects decay at about 10 to 20 percent per year—four to eight times slower than the decay rate between the initial reports. This difference implies that as the intervention is repeated, people gradually develop a new “capital stock” that generates persistent changes in outcomes. This capital stock might be physical capital, such as energy efficient lightbulbs or appliances, or “consumption capital”—a stock of energy use habits … Strikingly, however, even though the effects are relatively persistent and the “action and backsliding” has attenuated, consumers do not habituate fully even after two years: treatment effects in the third through fifth years are 50 to 60 percent stronger if the intervention is continued instead of discontinued.
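The decay arithmetic in that passage can be made concrete with a small sketch. The 10 to 20 percent annual decay rates come from the quote above; everything else below is purely illustrative:

```python
# Compound-decay sketch: if a treatment effect decays at a constant
# annual rate d, the fraction remaining after t years is (1 - d)**t.
def remaining_effect(initial_effect, annual_decay, years):
    """Treatment effect remaining after `years` of constant decay."""
    return initial_effect * (1 - annual_decay) ** years

# At the reported 10-20 percent per year post-discontinuation decay,
# most of the conservation habit survives for several years:
low_decay = remaining_effect(1.0, 0.10, 3)   # about 0.73 of the effect left
high_decay = remaining_effect(1.0, 0.20, 3)  # about 0.51 of the effect left
```

This is why the authors describe the repeated intervention as building a “capital stock”: once the decay rate falls this low, the conservation effect erodes only slowly even after the reports stop.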

One reason for having utilities encourage energy conservation is that it can be cheaper than building additional electricity generation plants, and it avoids the broader environmental costs of such plants. Electricity in the US across all sectors of the economy now costs about 11 cents per kilowatt-hour. Thus, Allcott and Rogers look at the cost-effectiveness of sending Opower reports, defined “as the cost to produce and mail reports divided by the kilowatt-hours of electricity conserved.”

They find that a one-shot Opower notice has a cost-effectiveness of 4.31 cents/kWh. Extending the intervention to two years costs more in sending out additional notices. But the continual reminders keep encouraging people to conserve, and over that time people build up a pattern of reduced energy consumption. In their words, “The two-year intervention is much more cost effective than the one-shot intervention, both because people have not habituated after the first report and because the capital stock formation process takes time.” Thus, the overall cost-effectiveness after about two years is about 1.5 cents/kWh.
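As a rough sketch of that cost-effectiveness arithmetic (the dollar and kilowatt-hour inputs below are hypothetical, chosen only to reproduce the reported ratios):

```python
# Cost-effectiveness = program cost / kWh conserved, expressed in cents.
def cost_effectiveness(cost_dollars, kwh_conserved):
    """Cents of program cost per kilowatt-hour of electricity conserved."""
    return 100 * cost_dollars / kwh_conserved

# Hypothetical: a one-shot mailing costing $1 per household that leads
# each household to conserve about 23 kWh works out to ~4.3 cents/kWh.
one_shot = cost_effectiveness(1.00, 23.2)

# A two-year program costs more in total but conserves far more, so the
# ratio falls -- attractive whenever it is below the ~11 cents/kWh
# retail price of electricity (and below the utility's avoided cost).
two_year = cost_effectiveness(24.00, 1600)  # 1.5 cents/kWh
```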

After two years, it seems possible to slack off on the reminders, sending them perhaps twice a year rather than with every monthly bill. This step reduces the cost of sending the reminders but seems to keep about the same level of effectiveness in holding down energy consumption. Indeed, households don’t seem to get used to the reports, but keep responding to the notices even in a third and fourth year. As the authors write: “However, it is remarkable how little cost effectiveness decreases after two years, suggesting strikingly little habituation.”

I’m not surprised that these kinds of notices about how much electricity the Joneses are using have a short-run effect. But the surprise in these results is that people keep chasing the Joneses toward greater electricity conservation even after several years of receiving these notices. It makes one wonder whether there aren’t some other ways in which information and social pressure, discreetly and privately applied, might be used in the service of environmental goals.

Antitrust Goes International

In a globalizing world economy, it’s perhaps no surprise that mergers and acquisitions are crossing national borders more frequently, and that the risk of cartel-like behavior across national borders is also rising. The OECD lays out some background in its recent report, “Challenges of International Co-operation in Competition Law Enforcement.” As a starting point, here’s a figure showing the overall upward pattern of cross-border merger and acquisition deals in recent decades.

In some cases, these M&A deals raise eyebrows on their own. In other cases, there is a concern that companies may be acting together across borders in a cartel-like way without an explicit deal. Indeed, the number of international cartels that have been discovered has been rising in recent years: “The number of cross-border cartels revealed in an average year has increased substantially since the early 1990s. According to the Private International Cartel (PIC) database, about 3 cross-border cartels were revealed via competition authority decisions or prosecutions in an average year between 1990 and 1994. In recent years, from 2007 to 2011, an average of about 16 cross-border cartels has been revealed per year.”

The last two decades have seen a rise in the number of countries with “competition laws” and antitrust authorities to enforce them.

“The spread of competition law enforcement around the world has been remarkable. At the end of the 1970s only nine jurisdictions had a competition law, and only six of them had a competition authority in place. By 1990, there were 23 jurisdictions with a competition law and 16 with a competition authority. The number of jurisdictions with competition authorities increased more than 500% between 1990 and 2013. As of October 2013, about 127 jurisdictions had a competition law, of which 120 had a functioning competition authority. … The speed and breadth of the proliferation of competition laws and competition enforcers around the globe is the single most important development in the competition area over the last 20 years.”

A main focus of the OECD report is the issues involved in coordinating the actions of competition authorities, which often apply different legal standards and, I suspect, may in many cases be more confrontational toward foreign companies than toward domestic firms. The U.S. is an active player in this area.

“According to Scott Hammond (former Deputy Assistant Attorney General for Criminal Enforcement), the Antitrust Division typically has approximately 50 international cartel investigations open at a time. Since May 1999, more than 40 foreign defendants have served, or are serving, prison sentences in the United States for participating in an international cartel or for obstructing an investigation of an international cartel. Foreign nationals from France, Germany, Japan, the Republic of Korea, Norway, the Netherlands, Sweden, Switzerland, Chinese Taipei and the United Kingdom are among those defendants. In the well-known vitamins case of 1999, for example, twelve individuals, including six European executives, were sentenced to serve time in US prisons for their role in the vitamin conspiracy. The automotive parts investigations exemplify the need for the Antitrust Division to co-operate with foreign counterparts. The investigation included search warrants executed on the same day and conducted at the same time as searches by enforcers in other countries. During the ongoing investigation the Antitrust Division coordinated with the competition law authorities of Japan, Canada, the Republic of Korea, Mexico, Australia, and the European Commission.”

The number of US Department of Justice antitrust cases with an international dimension has been rising over time. Indeed, international cases have for some years accounted for most of the fines collected by U.S. antitrust enforcement agencies. Here’s a figure showing the number of non-U.S. companies prosecuted in cartel investigations by the U.S. antitrust authorities, and the share of antitrust fines received from these cases.

When companies act together to limit competition and to carve up global markets, it is just as harmful as when they do so in domestic markets, but it can be harder to monitor and to enforce the laws against it. The battle is already underway.

What Path for Development in Africa — and Elsewhere?

As I’ve pointed out from time to time, the countries of sub-Saharan Africa have been experiencing genuine economic growth for the last decade or so, and not just the oil- and mineral-exporting countries, creating what is by the standards of developing economies an expanding middle class. But here comes Dani Rodrik to ask some tough, realistic questions in his essay, “Why an African Growth Miracle Is Unlikely,” written for the Fourth Quarter 2014 issue of the Milken Institute Review. Rodrik also argues this case in “An African Growth Miracle?” given as the Richard Sabot Lecture last April at the Center for Global Development.

As Rodrik readily acknowledges, many nations of Africa have seen both sustained growth and positive reform of their economic institutions.

Sub-Saharan Africa’s inflation-adjusted growth rate, after having spent much of the 1980s and 1990s in negative territory, has averaged nearly 3 percent annually in per capita terms since 2000. This wasn’t as stellar as East Asia’s and South Asia’s performances, but was decidedly better than what Latin America, undergoing its own renaissance of sorts, was able to achieve. Moreover, the growth isn’t simply the result of a revival in foreign investment: The region has been experiencing positive productivity growth for the first time since the early 1970s. It should not be entirely surprising, then, that the traditional pessimism about the continent’s economic prospects has been replaced by rosy scenarios focusing on African entrepreneurship, expanding Chinese investment and a growing middle class. … 

Agricultural markets have been liberalized, domestic markets have been opened to international trade, state-owned or controlled enterprises have been disciplined by market forces or closed down, macroeconomic stability has been established and exchange-rate management is infinitely better than in the past. Political institutions have improved significantly as well, with democracy and electoral competition becoming the norm rather than the exception throughout the continent. Finally, some of the worst military conflicts have ended, reducing the number of civil war casualties in recent years to historic lows for the region.

But Rodrik’s focus is not on whether a per capita growth rate of 3% is sustainable. Given continuing investments in human capital, infrastructure, and better trade ties across the continent, Africa’s economies can continue to grow. The question is whether sub-Saharan Africa can experience a growth “miracle” similar to that of many nations around east and south Asia (Japan, Korea, China, India, and others), where per capita economic growth veers up into the range of 7% per year or more, which is enough to double average living standards in a decade.
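The claim that roughly 7% annual growth doubles living standards in about a decade follows directly from compound growth: solve (1 + g)^t = 2 for t. A minimal sketch:

```python
import math

# Years for income to double at a constant annual growth rate g:
# solve (1 + g)**t = 2, so t = ln(2) / ln(1 + g).
def doubling_time(growth_rate):
    """Years for income to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

miracle_pace = doubling_time(0.07)  # about 10.2 years at 7% per year
recent_pace = doubling_time(0.03)   # about 23.4 years at Africa's ~3% pace
```

The gap between those two figures is the substance of Rodrik’s question: at 3% per capita growth, living standards double roughly once a generation; at miracle rates, they double every decade.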

Here, Rodrik argues, Africa’s prospects are shakier, because that kind of growth miracle requires some manner of structural transformation of the economy, and just how the nations of Africa might transform their economies in this way is quite unclear. How will Africa’s jobs of the future be generated? Rodrik writes:

“To generate sustained, rapid growth, Africa has essentially four options. The first is to revive manufacturing and put industrialization back on track, so as to replicate as much as possible the now-traditional route to economic convergence. The second is to generate agriculture-led growth, based on diversification into non-traditional agricultural products. The third is to kindle rapid growth in productivity in services, where most people will end up working in any case. The fourth is growth based on natural resources, in which many African countries are amply endowed.”

The problem with the manufacturing approach is that economic growth through a transformation built on low-wage manufacturing is getting harder. Rodrik writes:

On the other hand, the obstacles to industrialization in Africa may be deeper, and go beyond specific African circumstances. For various reasons we do not fully understand, industrialization has become really hard for all countries of the world. The advanced countries are, of course, deindustrializing, which is not a big surprise and can be ascribed to both import competition and a shift in demand to services. But middle-income countries in Latin America are doing the same. And industrialization in low-income countries is running out of steam considerably earlier than was the case before. This is the phenomenon that I have called premature deindustrialization.

My own thinking here is that the issue isn’t that manufacturing itself is going away, but that industrial robots are reaching the point where setting up a high-tech, highly automated manufacturing plant looks better and better compared to setting up a plant that relies heavily on low-wage human labor.

There are certainly lots of opportunities for increased productivity in African agriculture, but at least traditionally, improvements in the agricultural sector lead to fewer people working in agriculture. Perhaps the nations of Africa can alter this dynamic by moving into food processing and specialized products with higher value-added (like wine and cut flowers), but it’s hard to imagine building a growth and jobs miracle on the agricultural sector.

Many of Africa’s workers are ending up in the service sector, like workers in countries all around the world. But at least so far, the service sector has not served as the primary basis for a growth miracle in any country. Rodrik argues that the reason is that while a low-skilled agricultural worker can make an almost immediate transition to being a low-skilled manufacturing worker, the transition to a high-growth service sector often requires a wide range of complementary inputs. He writes:

Long years of education and institution-building are required before farm workers can be transformed into software programmers, or even call-center operators. Contrast this with manufacturing, where little more than manual dexterity is required to turn a farmer into a production worker in garments or shoes, raising his or her productivity by a factor of two or three. So raising productivity in services has typically required steady, broad-based accumulation of capabilities in human capital, institutions and governance. Unlike the case of manufacturing, technologies in most services seem less tradable and more context-specific (again with some exceptions such as cellphones). And achieving significant productivity gains seems to depend on complementarities across different policy domains.

Finally, a reliance on natural resources has been a part of the economic growth of many developed economies, like the U.S. economy in the late 19th and early 20th centuries, as well as countries like the United Kingdom and Norway (with North Sea oil). However, in many other cases there seems to be a “natural resources curse,” in which the economy ends up overly focused on natural resources in a way that weakens its underlying growth in all other sectors.

Rodrik’s bottom line is that while the nations of sub-Saharan Africa can surely continue to experience moderate rates of economic growth, they will need to invent their own path to find a growth miracle: “If African countries do achieve growth rates substantially higher than I have suggested is likely, they will do so by pursuing a growth model that is different from earlier miracles, which were based on industrialization. Perhaps it will be agriculture-led growth. Perhaps it will be services. But it will look quite different than scenarios we have seen before.”

I would add that the U.S. economy and the world face their own version of Africa’s economic growth problem. In the U.S., the old-style manufacturing jobs have been steadily diminishing. We aren’t likely to build the future of the U.S. economy primarily on growth in agriculture. Although the emergence of the U.S. as the world’s leader in oil and gas production should offer real benefits to the U.S. economy in the next couple of decades, it isn’t likely to be enough to drive the bulk of the $17 trillion U.S. economy. The core challenge facing the U.S. economy is how to combine its service-sector workers, especially its low- and middle-skill workers, with the new possibilities of technology in a way that leads to well-paid jobs, as well as to the kind of rising productivity and evolving skills that are behind a satisfying career path.