Credit Without Banks: Shadow Banking

One of the vivid lessons of the 2007-2009 recession and financial crisis is that in the modern economy, one can’t just think about the financial sector as made up of banks and the stock market. Other financial institutions can go badly wrong, with dire consequences. The Global Shadow Banking Monitoring Report 2014 from the Financial Stability Board helps give a sense of these other non-bank financial institutions–especially those that sometimes act in bank-like ways by receiving funds from investors, lending out those funds, and receiving interest payments.

The Financial Stability Board is an international working group that bubbled up in 2009 in the wake of the financial crisis. As it describes itself: “The FSB has been established to coordinate at the international level the work of national financial authorities and international standard setting bodies and to develop and promote the implementation of effective regulatory, supervisory and other financial sector policies in the interest of financial stability. It brings together national authorities responsible for financial stability in 24 countries and jurisdictions, international financial institutions, sector-specific international groupings of regulators and supervisors, and committees of central bank experts.”

The FSB starts with a measure of “Other Financial Institutions,” which includes “Money Market Funds, Finance Companies, Structured Finance Vehicles, Hedge Funds, Other Investment Funds, Broker-Dealers, Real-Estate Investment Trusts and Funds.” Here’s a figure showing how these “other financial institutions” (the red and blue bars) compare in size with the banking sector (the yellow bars) in a number of countries. In the U.S., for example, the “other financial institutions” are bigger than the banking sector.

What’s up with the huge size of the “other financial institutions” in the Netherlands? The report says: “In the Netherlands, Special Financial Institutions (SFIs) comprise about two-thirds of the OFIs sector and thereby explain most of the size of the shadow banking sector. There are about 14 thousand SFIs, which are typically owned by foreign multinationals who use these entities to attract external funding and facilitate intra-group transactions.” For a discussion of what some of these Dutch Special Financial Institutions are doing, you can check my post from last June on the Double Irish Dutch Sandwich.

Obviously, not all of these “other financial institutions” are engaged in bank-like activities outside the banking sector. Some are acting in ways that don’t involve the bank-like activities of making a loan and expecting an interest payment–for example, they may just be investing in the stock of companies or in land, or working with financial derivatives that involve commodity prices or exchange rates or interest rate movements. Some are interconnected with banks in ways that mean they are covered by the bank regulatory apparatus. So the FSB tries to subtract these activities out and get an estimate of the “shadow banking” sector by itself. For the U.S., shadow banking is estimated at about half of the total of “other financial institutions,” at roughly $13 trillion in assets.

I’ve discussed the concerns with shadow banking on this blog before (for example, here and here). The short story is that we learned long ago that economic instability can interact with instability in the banking sector in a way that causes economic and financial weakness to feed on each other in a vicious circle. This is why pretty much every country in the world has bank deposit insurance (to prevent bank runs) and bank regulation (to prevent banks from taking too much risk). But there are lots of “other financial institutions” that receive funds and make loans. This “shadow banking” sector too can be part of a vicious circle of economic and financial instability, as we learned from 2007-2009, and since. Here’s the FSB explanation of what shadow banking is and why it matters.

The “shadow banking system” can broadly be described as “credit intermediation involving entities and activities (fully or partially) outside the regular banking system” or non-bank credit intermediation in short. Such intermediation, appropriately conducted, provides a  valuable alternative to bank funding that supports real economic activity. But experience from the crisis demonstrates the capacity for some non-bank entities and transactions to operate on a large scale in ways that create bank-like risks to financial stability (longer-term credit extension based on short-term funding and leverage). …

Like banks, a leveraged and maturity-transforming shadow banking system can be vulnerable to “runs” and generate contagion risk, thereby amplifying systemic risk. Such activity, if unattended, can also heighten procyclicality by accelerating credit supply and asset price increases during surges in confidence, while making precipitate falls in asset prices and credit more likely by creating credit channels vulnerable to sudden loss of confidence. These effects were powerfully revealed in 2007-09 in the dislocation of asset-backed commercial paper (ABCP) markets, the failure of an originate-to-distribute model employing structured investment vehicles (SIVs) and conduits, “runs” on MMFs and a sudden reappraisal of the terms on which securities lending and repos were conducted. But whereas banks are subject to a well-developed system of prudential regulation and other safeguards, the shadow banking system is typically subject to less stringent, or no, oversight arrangements.

One part of the report focuses on the Americas, and here’s a figure I found thought-provoking. The horizontal axis shows “other financial institutions” relative to GDP, while the vertical axis shows the banking sector relative to GDP. At the far upper right is the Cayman Islands, with a very large banking sector and a very large “other financial institutions” sector relative to GDP. The other two countries with very large banking sectors relative to GDP are Panama and Canada. To put it another way, Panama has a banking sector like the Cayman Islands, but much less of the “other financial institutions.”

To me, the interesting comparison is between the U.S. and Canada–both high-income countries with sophisticated financial sectors. Clearly, the U.S. has a larger share of financial activity happening in the “other financial institutions” area, while Canada has a larger share of its financial activity happening explicitly in the banking sector. The Canadian economy is of course closely tied to the U.S. economy. But the recession in Canada was milder than in the U.S., perhaps in part because Canada’s financial sector was less exposed to the issues of shadow banking. Given that in both countries the banking sector is far more heavily regulated than shadow banking, this offers a sort of natural experiment or comparison as the economies of the two countries evolve.

A North American Vision

When talking about the U.S. role and prospects in a globalizing economy, it’s common to read discussions of issues relating to China, Japan, the European Union, and the “emerging market” countries. But perhaps when thinking about the U.S. economic and geopolitical future, a more basic building block should be to establish closer ties across North America. A report from the Council on Foreign Relations argues this case in “North America: Time for a New Focus” (Independent Task Force Report No. 71; David H. Petraeus and Robert B. Zoellick, Chairs; Shannon K. O’Neil, Project Director). Here’s a taste of the overall tone:

“[W]e believe that the time is right for deeper integration and cooperation among the three sovereign states of North America. Here is our vision: three democracies with a total population of almost half a billion people; energy self-sufficiency and even energy exports; integrated infrastructure that fosters interconnected and highly competitive agriculture, resource development, manufacturing, services, and technology industries; a shared, skilled labor force that prospers through investment in human capital; a common natural bounty of air, water, lands, biodiversity, and wildlife and migratory species; close security cooperation on regional threats of all kinds; and, over time, closer cooperation as North Americans on economic, political, security, and environmental topics when dealing with the rest of the world, perhaps focusing first on challenges in our own hemisphere. …

The people of North America are creating a shared culture. It is not a common culture, because citizens of the United States, Canada, and Mexico are proud of their distinctive identities. Yet when viewed from a global perspective, the similarities in interests and outlooks are pulling North Americans together. The foundation exists for North America to foster a new model of interstate relations among neighbors, both developing and developed democracies. Now is the moment for the United States to break free from old foreign policy biases to recognize that a stronger, more dynamic, resilient continental base will increase U.S. power globally. “Made in North America” can be the label of the newest growth market. U.S. foreign policy—whether drawing on hard, soft, or smart power—needs to start with its own neighborhood.”

The report stresses four main areas for cooperation: energy, cross-border economic ties, security concerns, and what it calls “community.” Here are a few words on each.

North America and Energy

North America has already tied together its energy markets in various ways: “For many years, virtually all of Canada’s energy exports—including oil, gas, and electricity—went to the United States. … The North American countries are also connected through their electricity grids; this is especially true for the United States and Canada. The Eastern Interconnection grid—encompassing parts of Eastern Canada, New England, and New York—and the Western Interconnection grid—stretching from Manitoba through the U.S. Midwest—are mutually dependent and beneficial configurations. Though the U.S.-Canada electricity trade constitutes less than 2 percent of total U.S. domestic consumption, the interchanges provide resiliency in case of power overloads or natural disasters. U.S.-Mexico interconnections are more limited, though the two countries are linked in southern California and southwestern Texas.”

Pipeline connections are happening as well. Natural gas pipelines are being built from Texas producers into Mexico. The report advocates construction of the Keystone pipeline from Canada into the United States. More broadly, it notes: “The construction of North America’s energy infrastructure has delayed oil and gas development. … North Dakota’s Bakken formation, one of the United States’ largest shale formations, continues to flare nearly one-third of its natural gas because of infrastructure limitations. North America should build new pipelines and upgrade older ones, both within and among the three countries, to address the bottlenecks. Without adequate pipeline capacity, energy companies have increasingly turned to the rails, roads, and waterways.”

In Mexico, the big news is that oil production has been falling, which in turn has led the Mexican government to start expressing some openness to foreign investment. “In contrast, Mexican oil production has fallen nearly 25 percent since 2004 to 2.5 million b/d in 2012. The downturn reflects the declining output at Cantarell—once the world’s second-largest oil field—combined with lower-than-expected production levels in newer fields, such as the Chicontepec Basin. The decline can also be traced to underinvestment, inefficiencies, and limits on technology and expertise at the state-owned energy company Petróleos Mexicanos (Pemex). Nevertheless, Mexico’s energy potential is substantial. The EIA and Advanced Resources International (ARI) estimate that the country has the world’s sixth-largest recoverable shale gas resources and significant tight oil potential. Mexico has now made a historic move: its energy reform of December 2013 will encourage private companies to invest in Mexico’s energy sector for the first time since the 1930s.”

For decades, I’ve been reading and hearing and writing about the risks and costs of U.S. dependence on faraway sources of energy in the Middle East. In a global economy, energy markets will inevitably be intertwined. But the U.S. energy picture is being fundamentally reshaped by the growth of U.S. oil and gas drilling. If combined with the broader development of North American energy resources, the economics and international power dynamics of energy production could be transformed.

North American Economic Ties

Economic ties across North America don’t always get lots of attention, but they are large. “The United States exports nearly five times as much to Mexico and Canada as it does to China and almost twice as much as to the European Union. Mexico and Canada sell more than 75 percent of their exports within North America.” Here’s one figure showing the rise in North American trade, and another showing the rise in foreign direct investment in North America.

As the report notes: “North America also shares a workforce: companies and corporations now make products and provide services in all three countries. With integrated supply chains, employees in one country depend on the performance of those in another; together, they contribute to the quality and competitiveness of final products that are sold regionally or globally.” I would add that the economic evidence shows the North American Free Trade Agreement had a modest but clearly positive effect on the U.S. economy.

As I see it, there are two underlying points here. First, the world economy seems to be organizing itself into regions that rely on supply chains crossing between higher-income and lower-income countries. For example, in Asia there are supply chains running from Japan and Korea to Thailand and China. In Europe there are supply chains running from western to eastern Europe. North America has been building its own supply chains linking the U.S., Canada, and Mexico. Second, and more broadly, a U.S. economy that wants to prosper from growth happening elsewhere in the world economy needs to start thinking internationally. Thinking internationally in terms of Mexico and Canada is a start in that direction.

Security Issues

National security issues are not my bailiwick, so I don’t have much to say here. But I’ll make the commonplace observation that one often hears concerns about terrorist groups who might be able to ship people or materials into the U.S. by way of Canada or Mexico. There are also concerns about how epidemics might spread, or about how to deal with natural disasters. In all of these cases, there are obvious advantages to not just thinking in terms of the U.S. border, but also thinking about a sort of border around the continent of North America. Deeper economic and energy relationships could easily be combined with some coordination of security and other measures. I’m all in favor of stopping terrorist plots before they cross the U.S. border.

Community

This is the catch-all word that the report uses for demographic, travel, and immigration issues. Here is some discussion of the demographic issues:

Compared to the rest of the world, North America enjoys an enviable demographic pyramid: the region’s population is relatively young and fertile. North America benefits from larger families—averaging just over two children per family versus 1.6 in Europe and 1.7 in China—with the advantage coming largely from Mexico’s younger population  and slightly higher birth rates. In fact, Mexico is currently in the middle of its “demographic bonus”—the country’s working-age adults outnumber children and the elderly. By comparison, the United States’ and Canada’s demographics are more mature, but their age pyramids have been tempered by their relatively open immigration policies. The region’s future workforce size—a fundamental factor in calculating future economic growth—also compares favorably, with 22 percent of North Americans below thirty years old, compared to 16 percent in both China and Europe. North America has yet to make the most of its demographic advantages.

Here’s some discussion of cross-border movement and residence in North America:

“Some thirty-four million Mexicans and Mexican-Americans and more than three million Canadians and Canadian-Americans live in the United States. Nearly one million U.S. expatriates and a large number of Canadians live, at least part of the year, in Mexico. Another one million to two million U.S. citizens and a growing number of Mexicans live in Canada. Shorter stays are numerous. U.S. citizens choose Mexico for their getaways more than any other foreign locale. Mexicans and Canadians return the favor, comprising the largest groups of tourists entering the United States: a combined thirty-four million visitors each year who contribute an estimated $35 billion to the U.S. economy. Workers, students, and shoppers routinely cross the borders; there were 230 million land border crossings in 2012, or roughly 630,000 a day. Indigenous communities also span the border, with residents frequently crossing back and forth.”

Finally, there’s the ever-touchy subject of immigration. I’ve posted my thoughts on immigration policy before, for example here and here, or the five consecutive posts on immigration policy back in February 2012: here, here, here, here, and here. I won’t rehearse all the arguments again here, but a couple of points seem worth making.

First, Mexico seems to be evolving away from being a country that mostly experiences emigration. A couple of years ago, net emigration from Mexico essentially stopped. Now, Mexico is experiencing a certain degree of immigration–often in the form of former emigrants who are returning. The report says:

As a traditional country of emigration, Mexico’s immigration policies are different from those of its northern neighbors. These dynamics are beginning to change. With roughly 1.4 million former emigrants returning to Mexico between 2005 and 2010, the country can utilize the skills and capital that migrants bring home. Mexico also now faces an inflow of people born abroad—immigrants grew from just under five hundred thousand in 2000 to almost one million in 2010. More than three-quarters of these immigrants were born in the United States; the vast majority are children under the age of fifteen.

For this reason, thinking about the potential for emigration from Mexico to the U.S. in terms of the experiences of the 1980s or the 1990s is likely to be misleading.

In addition, I find myself wondering if thinking about migration from Mexico in a broad North American context might offer an alternative approach to the U.S. debate over immigration. Imagine a scenario in which it was relatively easy and legal for people from Mexico, Canada, and the United States to work in each other’s countries, but each country could keep its immigration rules with regard to all other countries in the world. Imagine further that people from Mexico, Canada, and the United States could work in each other’s countries but would not become citizens of the other country (unless they separately applied to do so) and would not be eligible for income support programs in the other country (unless the host country passed specific laws offering such support).

Of course, this in-between approach to cross-border migration wouldn’t please either those who want to open the borders or those who want to close them. It requires thinking about freedom of movement across North American countries in a different way than we have traditionally done. But the greater freedom of movement might be useful in offering legal status, short of citizenship, for Mexicans who are already working and living in the United States. And if the freedom of movement were limited to North America, then some of the concerns about opening U.S. borders to the world’s ultra-poor would be ameliorated. I haven’t thought through the possibilities of such an arrangement in detail, but as part of thinking about what a true North American geopolitical collaboration might look like, it seems worth pondering.

The Excessive Sameness of Politics and Hotelling’s Main Street

A lot of politicians may sound like they have differentiated views on the campaign trail, but either during the campaign or after being elected, they seem to become homogenized and squishy in their views. Thus, many voters of all political dispositions are continually frustrated because they feel as if all politicians are discomfortingly alike. Maybe you want to vote for someone who isn’t a hanging-off-the-ideological-cliff extremist, but who still has clear and definite views–even if you differ with some of those views. To quote a phrase associated with the Goldwater presidential campaign of 1964, but applicable to all sides of the political spectrum, many voters feel that they want “a choice, not an echo.”

A famous long-ago economist named Harold Hotelling proposed a classic explanation for this phenomenon in a paper called “Stability in Competition,” published in the March 1929 issue of the Economic Journal (39:153, pp. 41-57).

In one of his illustrations, Hotelling discussed the case of two sellers of a product who are thinking about where to locate along Main Street. For simplicity, imagine that the addresses along the street are numbered from 1-100. The working assumption is that customers are spread evenly along Main Street, and each customer will go to whichever store is located closer. In this situation, if one store locates at, say, 10 Main Street, the other store will then choose to locate at 11 Main Street. The first store will then get all the customers from 1-10, and the second store will get all the customers from 11-100. The first store will then relocate to 12 Main Street, to snag the majority of customers, and the two stores will keep leap-frogging each other and relocating until they end up located side by side, right in the middle of Main Street.

As Hotelling pointed out back in 1929, this clustering is not ideal. From the consumers’ point of view, it would be more useful to have the two stores located at 25 Main Street and 75 Main Street, because then no customer along the street from 1 to 100 would be more than 25 addresses away from a store. But the dynamics of competition can lead to excessive clustering.
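The leap-frogging dynamic is simple enough to simulate. The sketch below is a simplified illustration under the assumptions of the story above (100 evenly spaced customers, each shopping at the strictly nearer store; the function names are my own): each store repeatedly relocates to its best response against the rival, and the sketch also compares the worst customer trip under spread-out versus clustered locations.

```python
# Hotelling's Main Street: 100 customers at addresses 1-100, each shopping
# at the nearer of two stores. Stores take turns relocating to the address
# that wins the most customers, given the rival's position.

def customers_won(mine, rival):
    """Count customers strictly closer to my address than to the rival's."""
    return sum(1 for x in range(1, 101) if abs(x - mine) < abs(x - rival))

def best_response(rival):
    """The address in 1-100 that captures the most customers against the rival."""
    return max(range(1, 101), key=lambda a: customers_won(a, rival))

a, b = 10, 90  # arbitrary starting locations near the ends of the street
for _ in range(100):  # leap-frog until neither store wants to move
    new_a = best_response(b)
    new_b = best_response(new_a)
    if (new_a, new_b) == (a, b):
        break
    a, b = new_a, new_b

print(sorted((a, b)))  # the stores end up side by side at mid-street

def worst_trip(s1, s2):
    """The longest walk any customer faces to reach the nearer store."""
    return max(min(abs(x - s1), abs(x - s2)) for x in range(1, 101))

print(worst_trip(25, 75))  # spread out: no customer walks more than 25
print(worst_trip(a, b))    # clustered at the middle: worst walk near 50
```

Run from any starting positions, the two stores converge to adjacent addresses at the center of the street, and the worst customer trip roughly doubles compared with the 25/75 arrangement that Hotelling noted would serve consumers better.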

Hotelling argued that this excessive sameness is apparent in many aspects of public competition, including competition between firms introducing new products, and competition between Republicans and Democrats. He wrote:

“Buyers are confronted everywhere with an excessive sameness. When a new merchant or manufacturer sets up shop he must not produce something exactly like what is already on the market or he will risk a price war … But there is an incentive to make the new product very much like the old, applying some slight change which will seem an improvement to as many buyers as possible without ever going far in this direction. The tremendous standardisation of our furniture, our houses, our clothing, our automobiles and our education are due in part to the economies of large-scale production, in part to fashion and imitation. But over and above these forces is the effect we have been discussing, the tendency to make only slight deviations in order to have for the new commodity as many buyers of the old as possible, to get, so to speak, between one’s competitors and a mass of customers.

So general is this tendency that it appears in the most diverse fields of competitive activity, even quite apart from what is called economic life. In politics it is strikingly exemplified. The competition for votes between the Republican and Democratic parties does not lead to a clear drawing of issues, an adoption of two strongly contrasted positions between which the voter may choose. Instead, each party strives to make its platform as much like the other’s as possible. Any radical departure would lose many votes, even though it might lead to stronger commendation of the party by some who would vote for it anyhow. Each candidate “pussyfoots,” replies ambiguously to questions, refuses to take a definite stand in any controversy for fear of losing votes. Real differences, if they ever exist, fade gradually with time though the issues may be as important as ever. The Democratic party, once opposed to protective tariffs, moves gradually to a position almost, but not quite, identical with that of the Republicans. It need have no fear of fanatical free-traders, since they will still prefer it to the Republican party, and its advocacy of a continued high tariff will bring it the money and votes of some intermediate groups.”

Of course, it’s not literally true that Republican and Democratic politicians both locate exactly in the middle of the political spectrum. Hotelling was describing a tendency to push to the middle, but in politics, there is also a need to assure your voters that you share their beliefs. Thus, there’s a saying that American politics is a battle fought between the 40-yard lines. (For those unfamiliar with the line markers on an American football field, the statement suggests that the political battle is fought between the addresses of 40 and 60 on a Hotelling-style Main Street.) Mainstream politicians thus face a continual dynamic in which they seek to reassure their more ardent partisans that they are on their side, while shading and tacking as needed to pick up voters in the middle. At an intuitive level, politicians recognize that offering “a choice, not an echo” is part of what led Barry Goldwater to a loss of historic magnitude in the 1964 U.S. presidential election.

Political competition that is usually between centrists, whether right-of-center or left-of-center, does have some benefits. Extremists are much less likely to win high office. And even when the other side wins, it’s reassuring to think that the person who won is at least closer to the center than the true believers at the extreme of that side. But every now and then, many of us yearn for a few more conviction politicians, who say what they mean and mean what they say, who play a greater role in driving the public debate, and who are OK with the possibility that doing so might cost them an election.

[For the record, using the metaphor of a football field to describe the range of political choice seems to have originated with the 1970 best-seller The Real Majority: An Extraordinary Examination of the American Electorate, by Ben Wattenberg and Richard M. Scammon. But they used the image to discuss how political conflict might sometimes be between those near the middle and sometimes between those with more extreme positions. The claim that American politics usually happens between the 40-yard lines is one of those statements that seems to have evolved afterwards, without a clear single author.]

Is Better Communication Longer and More Complex?

Twenty years ago, when the Federal Reserve Open Market Committee wanted to change interest rates, it didn’t make any announcement. It just took action, and market participants inferred the change from those actions. Mark Wynne of the Federal Reserve Bank of Dallas explains in “A Short History of FOMC Communication”:

The first time the FOMC issued a statement immediately after a meeting explaining what action had been decided was on Feb. 4, 1994. That statement simply noted that the committee decided to “increase slightly the degree of pressure on reserve positions” and that this was “expected to be associated with a small increase in short-term money market interest rates.” By way of explanation for why the committee was announcing its decision, the statement said that this was being done “to avoid any misunderstanding of the committee’s purposes, given the fact that this is the first firming of reserve market conditions by the committee since early 1989.” In February 1995, the committee decided that all changes in the stance of monetary policy would be announced after the meeting.

But over the years, these announcements of Fed policy have become longer and more complex. Rubén Hernández-Murillo and Hannah Shell of the Federal Reserve Bank of Cleveland have created a vivid figure to show the change in “The Rising Complexity of the FOMC Statement.” The colors of the circles show who was leading the Fed at the time: blue for Greenspan, red for Bernanke, and green for Yellen. The area of each circle shows the number of words in the statement: clearly, the statements have been getting wordier over time. And on the vertical axis, the FOMC statements were run through a standard diagnostic tool for determining their “reading grade level.” In short, the statements back in the mid-1990s were often pitched at about a 12th grade level. But over time, and especially after the financial crisis hit, the FOMC statements ratcheted up to a “19th grade” level, which is to say that they were pitched at readers with post-college graduate study.
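A common “reading grade level” diagnostic of this kind is the Flesch-Kincaid grade level, which rises with longer sentences and longer words. Here is a rough sketch of how such a tool works; the crude vowel-group syllable counter and the sample sentences are my own simplifications for illustration, and production tools use dictionaries or better heuristics.

```python
# Flesch-Kincaid grade level, one standard readability diagnostic:
#   0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59
import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels; real tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

short = "The Fed raised rates. Growth is strong."
long_winded = ("The Federal Open Market Committee anticipates maintaining "
               "an accommodative stance of monetary policy until substantial "
               "further progress toward its longer-run objectives materializes.")

# Longer sentences with longer words push the grade level up sharply.
print(round(fk_grade(short), 1), round(fk_grade(long_winded), 1))
```

On this metric, the terse pair of sentences scores at an early grade-school level, while the single Fed-speak sentence scores well past the graduate level, which is the pattern the Hernández-Murillo and Shell figure documents for actual FOMC statements.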

This trend raises a question last spotted among the “advice to the lovelorn” columnists: Is communication better if it is longer and more complex? As an editor, I confess that I’m suspicious of length and complexity. A wise economist friend used to point out to me that in academia, specialized terminology always serves two purposes: it streamlines and simplifies communication among specialists, and it shuts out nonspecialists. Of course, all academics like to believe that we use specialized terminology only for the loftiest of intellectual purposes–not because we are as much of an in-group as a set of gossiping teenagers, with our own slang devised to define membership in the group and to separate ourselves from others.

But even my cynical side remembers a fundamental rule of exposition often attributed to Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

It makes sense that the Federal Reserve statements became longer and more complex as the Fed began to specify a numerical range for its interest rate policies in the late 1990s; then began to describe how it saw future risks in 1999; then began to specify how quickly it expected to adjust future monetary policy in 2004; and then began its “quantitative easing” policy in 2008.

In December 2012, as Wynne points out, the Fed altered its communication substantially by saying that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent, inflation between one and two years ahead is projected to be no more than a half percentage point above the committee’s 2 percent longer-run goal and longer-term inflation expectations continue to be well anchored.” In other words, the Fed for the first time had announced that it would keep interest rates at a certain level until a certain economic statistic–the unemployment rate–had moved in a certain way. But when the unemployment rate fell below 6.5 percent in April 2014, the earlier statement had created expectations that the Fed would start raising interest rates–which, as it turned out, the Fed wasn’t yet quite ready to do.

How the Federal Reserve and other major central banks carry out their policies has been fundamentally transformed during the last six years. Explaining these changes is important. But if the explanations are pitched at a level that only makes sense to PhD economists, they aren’t much help. And as the personal advice columnists remind us, if someone is talking and talking but not giving you a straight answer that you can understand, you have some reason to mistrust whether they know their own mind–and even whether they are really trying to tell you the truth. I’ll give the last word here to Hernández-Murillo and Shell:

As the Fed returns to using conventional monetary policy tools, it is likely that the reading levels of its statements will decline. However, if the Fed continues to use unconventional instruments for a considerable period, it may need to consider how to explain its policy actions in simpler terms to avoid volatility in financial markets. 

Should Voting be Compulsory?

Just to put my cards face up on the table right here at the start, I'm not in favor of compulsory voting. But I think the case for doing so is stronger than commonly recognized. Let me lay out the arguments as I see them: low turnout, what the penalties look like in some other countries for not voting, the free speech/constitutional issues, and whether any resulting differences in outcomes would be desirable.

[This post was originally published on Election Day, November 6, 2012.]

The case for making voting compulsory begins with the (arguable) notion that democracy would be better served if participation in elections were higher. Here's a figure from a post of mine a couple of months ago on "Voter Turnout Since 1964." With some variation across age groups, voter turnout in presidential elections has been sagging over the last few decades.


Some nations have responded to concerns over low voter turnout by passing laws that make it a requirement to vote. Here's a list of countries with such laws, and the penalties that they impose for not voting, taken from a June 2006 report from Britain's Electoral Commission. The penalties are categorized from "Very Strict" to "None." But honestly, even the "Very Strict" penalties are not especially onerous.



In talking with people on this subject, I've found that one immediate response is that compulsory voting must somehow be a violation of freedom of speech. I have some of this reaction myself. But while one may reasonably oppose the idea of compulsory voting, the case that it violates a specific law or constitutional right is difficult to make. Indeed, the original 1777 constitution of the state of Georgia specifically called for a potential penalty of five pounds for not voting, although it also allowed an exception for those with a good explanation. If the U.S. government can require you to pay money for taxes, or compel you to serve on jury duty, or institute a military draft, it probably has the power to require that you show up and vote. Of course, a compulsory voting law would almost certainly include provisions for conscientious objectors to voting, and you would be permitted to turn in a totally blank ballot if you wish. The penalties for not voting would be an inconvenience, but far from draconian.

For a review of the various legal and constitutional ins and outs of compulsory voting, along with some of the practical arguments, I recommend this anonymous 2007 note in the Harvard Law Review, called "The Case for Compulsory Voting."

The author points out (footnotes omitted): "Approximately twenty-four nations have some kind of compulsory voting law, representing 17% of the world's democratic nations. The effect of compulsory voting laws on voter turnout is substantial. Multivariate statistical analyses have shown that compulsory voting laws raise voter turnout by seven to sixteen percentage points."

The anonymous author also offers what seem to me ultimately the two strongest arguments for compulsory voting. The first argument is that a larger turnout will (arguably) provide a more accurate representation of what the public wants, and in that sense will strengthen the bond between the electorate and its elected representatives. The second and more subtle argument is that compulsory voting would mean that political parties could focus much less on voter turnout. Less money and effort could go into turning out the vote, and more into persuasion. Those who now vote almost certainly have stronger partisan feelings, on average, than those who don't vote. So politicians aim their advertisements and strategies at that more partisan group. Many negative campaign ads attempt to reduce turnout for a candidate: if turnout were high, the usefulness of such negative ads would be diminished. A broader spectrum of voters would push candidates to offer a broader spectrum of messages to appeal to those voters, and groups that now have low turnout would find themselves equally courted by politicians.

The question becomes whether these potential benefits to the democracy as a whole are worth the imposition of compulsory voting. The anonymous writer in the Harvard Law Review offers what is surely meant to be an attention-grabbing and paradoxical-sounding conclusion: "Although there are several legal obstacles to compulsory voting, none of them appear to be substantial enough to bar compulsory voting laws. … The biggest obstacle to compulsory voting is the political reality that compulsory voting seems incompatible with many Americans' notions of individual liberty. As with many other civic duties, however, voting is too important to be left to personal choice."

How might one respond to these arguments? Perhaps the most obvious answer is that if one looks at the countries that have compulsory voting–say, Brazil, Australia, Peru, Thailand–it's not obvious that their politics are characterized by greater appeals to the nonpartisan middle, or that the bond between the population and its elected representatives is especially strong.

For a more detailed deconstruction, I recommend a 2009 essay by Annabelle Lever in the journal Public Reason, "Is Compulsory Voting Justified?" Basically, her argument comes down to a belief that the potential gains from compulsory voting are unproven and unsupported by evidence in countries that have tried it, while the lost freedom from compulsory voting would be definite and real.

In Lever's view, the evidence that exists doesn't show that political parties start competing for the middle in a different way, nor that outcomes are different. For example, northern European social democratic countries like Sweden don't have compulsory voting, and do have declining voter turnout.
If people are uninterested or disillusioned and don't want to vote for the existing candidates, it's not clear that threatening them with a criminal offense for not voting will build connections from the population to elected representatives. If political parties don't need to focus on turnout, they will immediately turn to other ways of identifying swing groups and wedge issues. The penalties for not voting may not look large in some broad sense, but let's be clear: when we enter the realm of compulsory voting, we are talking about criminal penalties. We will need to decide how large the fines or other penalties will be, and what happens to those (and there will be some!) who refuse to pay. If not voting is a crime, we will be making a lot of people into criminals–maybe guilty of only a minor crime, but still recorded in our information-technology society as breaking the law. It is by no means clear that having a right to vote should be reinterpreted as having a legal duty to vote: there are many rights that one may choose to exercise, or not, as one prefers. In a free society, the right to be left alone has some value, too. Lever concludes:

"I have argued that the case for compulsory voting is unproven. It is unproven because the claim that compulsion will have beneficial results rests on speculation about the way that nonvoters will vote if they are forced to vote, and there is considerable, and justified, controversy on this matter. Nor is it clear that compulsory voting is well-suited to combating those forms of low and unequal turnout that are, genuinely, troubling. On the contrary, it may make them worse by distracting politicians and voters from the task of combating persistent, damaging, and pervasive forms of unfreedom and inequality in our societies.

"Moreover, I have argued, the idea that compulsory voting violates no significant rights or liberties is mistaken and is at odds with democratic ideas about the proper distribution of power and responsibility in a society. It is also at odds with concern for the politically inexperienced and alienated, which itself motivates the case for compulsion. Rights to abstain, to withhold assent, to refrain from making a statement, or from participating, may not be very glamorous, but can be nonetheless important for that. They are necessary to protect people from paternalist and authoritarian government, and from efforts to enlist them in the service of ideals that they do not share. Rights of non-participation, no less than rights of anonymous participation, enable the weak, timid and unpopular to protest in ways that feel safe and that are consistent with their sense of duty, as well as self-interest. … People must, therefore, have rights to limit their participation in politics and, at the limit, to abstain, not simply because such rights can be crucial to prevent coercion by neighbours, family, employers or the state, but because they are necessary for people to decide what they are entitled to do, what they have a duty to do, and how best to act on their respective duties and rights."

I don't know of any recent polls on how Americans feel about compulsory voting, but a 2004 poll by ABC News found 72% opposed–a slightly higher percentage than a poll taken 40 years earlier on the same subject. These kinds of results from nationally representative polls pose an additional level of irony. If Americans as a group are strongly opposed to laws that would require compulsory voting, it seems problematic to glide around this opposition into an argument that, really, although they don't know it yet, they would be better off with compulsory voting.

In a 2004 essay on compulsory voting (in this volume), Maria Gratschew points out that a number of countries in western Europe that used to have compulsory voting have moved away from it in recent decades: Austria, Italy, Greece, and the Netherlands. In discussing the decision by the Netherlands to drop its compulsory voting laws in 1967, Gratschew writes: "A number of theoretical as well as practical arguments were put forward by the committee: for example, the right to vote is each citizen's individual right which he or she should be free to exercise or not; it is difficult to enforce sanctions against non-voters effectively; and party politics might be livelier if the parties had to attract the voters' attention, so that voter turnout would therefore reflect actual participation and interest in politics."

Compulsory voting is one of those intriguing roads that looks better when not actually traveled.

Stanley Fischer: How Far on Financial Reform?

In the aftermath of the Great Recession, it seemed blindingly clear that the financial sector needed reform. But what kind of reform was needed, and what progress has been made? Stanley Fischer takes on these questions in the 2014 Martin Feldstein Lecture, "Financial Sector Reform: How Far Are We?" published in the NBER Reporter (2014, vol. 3).

Everyone in academic economics knows who Fischer is, but for those outside, he's a man with an extraordinary resume. He is currently vice-chairman of the Federal Reserve. Before that, he was governor of the Bank of Israel for eight years. Before that, at various times, he was chief economist at the World Bank, first deputy managing director of the International Monetary Fund, vice chairman at Citibank, and a very prominent economics professor at MIT. Here is Fischer's quick summary of the nine main items on the financial sector reform agenda:

Several financial sector reform programs were prepared within a few months after the Lehman Brothers failure. These programs were supported by national policymakers, including the community of bank supervisors. The programs – national and international – covered some or all of the following nine areas: (1) to strengthen the stability and robustness of financial firms, "with particular emphasis on standards for governance, risk management, capital and liquidity"; (2) to strengthen the quality and effectiveness of prudential regulation and supervision; (3) to build the capacity for undertaking effective macroprudential regulation and supervision; (4) to develop suitable resolution regimes for financial institutions; (5) to strengthen the infrastructure of financial markets, including markets for derivative transactions; (6) to improve compensation practices in financial institutions; (7) to strengthen international coordination of regulation and supervision, particularly with regard to the regulation and resolution of global systemically important financial institutions, later known as G-SIFIs; (8) to find appropriate ways of dealing with the shadow banking system; and (9) to improve the performance of credit rating agencies, which were deeply involved in the collapse of markets for collateralized and securitized lending instruments, especially those based on mortgage finance.

In his talk, he focuses on three of these items: "Rather than seek to give a scorecard on progress on all the aspects of the reform programs suggested from 2007 to 2009, I want to focus on three topics of particular salience mentioned earlier: capital and liquidity, macroprudential supervision, and too big to fail."

Capital ratios have been substantially increased, which means that banks will in the future have a bigger buffer the next time their loan portfolio turns unexpectedly bad. Here's Fischer:

The bottom line to date: The capital ratios of the 25 largest banks in the United States have risen by as much as 50 percent since the beginning of 2005 to the start of this year, depending on which regulatory ratio you look at. For example, the tier 1 common equity ratio has gone up from 7 percent to 11 percent for these institutions. The increase in the ratios understates the increase in capital because it does not adjust for tougher risk weights in the denominator. In addition, the buffers of HQLAs [high-quality liquid assets] held by the largest banking firms have more than doubled since the end of 2007, and their reliance on short-term wholesale funds has fallen considerably. At the same time, the introduction of macroeconomic supervisory stress tests in the United States has added a forward-looking approach to assessing capital adequacy, as firms are required to hold a capital buffer sufficient to withstand a several-year period of severe economic and financial stress. The stress tests are a very important addition to the toolkit of supervisors, one that is likely to add significantly to the quality of financial sector supervision.
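The arithmetic behind these regulatory ratios is simple, and seeing it laid out helps make sense of Fischer's point that tougher risk weights cause the ratio increase to understate the capital increase. Here is a minimal sketch with purely hypothetical numbers (chosen only to reproduce the 7-to-11-percent movement he cites, not taken from any bank's actual balance sheet):

```python
def tier1_common_ratio(common_equity, risk_weighted_assets):
    """Tier 1 common equity ratio: common equity divided by risk-weighted assets."""
    return common_equity / risk_weighted_assets

# Hypothetical bank: $70 billion of common equity against $1,000 billion of
# risk-weighted assets gives roughly the 7 percent starting point Fischer cites.
before = tier1_common_ratio(70, 1000)

# Raising equity to $110 billion against the same assets lifts the ratio to 11 percent.
after = tier1_common_ratio(110, 1000)

# If tougher risk weights had also pushed risk-weighted assets up to $1,100 billion,
# the bank would need even more equity (here $121 billion) to show the same 11 percent;
# the published ratio alone understates how much capital was added.
after_tougher_weights = tier1_common_ratio(121, 1100)

print(f"{before:.0%} -> {after:.0%} (or {after_tougher_weights:.0%} with a larger denominator)")
```

The last case illustrates why comparing ratios across time requires knowing whether the denominator's risk weights also changed.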

The proposed changes in macroprudential regulation (a concept previously introduced and discussed here and here on this blog) are at best incomplete so far.

As is well known, the United Kingdom has reformed financial sector regulation and supervision by setting up a Financial Policy Committee (FPC), located in the Bank of England; the major reforms in the United States were introduced through the Dodd-Frank Act, which set up a coordinating committee among the major regulators, the Financial Stability Oversight Council (FSOC). Don Kohn … sets out the following requirements for successful macroprudential supervision: to be able to identify risks to financial stability, to be willing and able to act on these risks in a timely fashion, to be able to interact productively with the microprudential and monetary policy authorities, and to weigh the costs and benefits of proposed actions appropriately. Kohn's cautiously stated bottom line is that the FPC is well structured to meet these requirements, and that the FSOC is not. In particular, the FPC has the legal power to impose policy changes on regulators, and the FSOC does not, for it is mostly a coordinating body.

What about the efforts to address the "too big to fail" problem, where large financial firms that have taken big risks and earlier earned large profits then need to be bailed out by the government, because they are so large and so interconnected with the rest of the economy that their failure could lead to even larger public costs?

One can regard the entire regulatory reform program, which aims to strengthen the resilience of banks and the banking system to shocks, as dealing with the TBTF [too big to fail] problem by reducing the probability that any bank will get into trouble. There are, however, some aspects of the financial reform program that deal specifically with large banks. The most important such measure is the work on resolution mechanisms for SIFIs, including the very difficult case of G-SIFIs [global systemically important financial institutions]. In the United States, the Dodd-Frank Act has provided the FDIC with the Orderly Liquidation Authority (OLA) – a regime to conduct an orderly resolution of a financial firm if the bankruptcy of the firm would threaten financial stability. …

Work on the use of the resolution mechanisms set out in the Dodd-Frank Act, based on the principle of a single point of entry, holds out the promise of making it possible to resolve banks in difficulty at no direct cost to the taxpayer — and in any event at a lower cost than was hitherto possible. However, work in this area is less advanced than the work on raising capital and liquidity ratios. … [P]rogress in agreeing on the resolution of G-SIFIs and some other aspects of international coordination has been slow. … 

What about simply breaking up the largest financial institutions? Well, there is no "simply" in this area. … Would a financial system that consisted of a large number of medium-sized and small firms be more stable and more efficient than one with a smaller number of very large firms? … That is not clear, for Lehman Brothers, although a large financial institution, was not one of the giants — except that it was connected with a very large number of other banks and financial institutions. Similarly, the savings and loan crisis of the 1980s and 1990s was not a TBTF crisis but rather a failure involving many small firms that were behaving unwisely, and in some cases illegally. This case is consistent with the phrase, "too many to fail." Financial panics can be caused by herding and by contagion, as well as by big banks getting into trouble. In short, actively breaking up the largest banks would be a very complex task, with uncertain payoff.

Fischer's overall tone seems to me cautiously positive about how financial reform has proceeded. My own sense is more negative. By Fischer's accounting, capital ratios have clearly improved, but macroprudential regulation and avoiding too big to fail remain works in progress.

As I have pointed out before on this website, the Dodd-Frank Act of 2010 required the passage of 398 rules, and a little more than half of those rules have been finalized in the last few years. But to be clear, a "finalized" rule doesn't mean that businesses have actually figured out how to comply with it, and the details of implementation may still be under negotiation. In certain areas from Fischer's list of rules, like #8 concerning the risks of shadow banking (for discussion, see here and here) and #9 concerning credit rating agencies (for discussion, see here and here), Dodd-Frank required almost no changes at all. I understand that financial reform is like changing the course of an enormous ocean liner, not like steering a bicycle. But we are now six years past the worst of the crisis in 2008, and much remains to be done.

Oikonomia, Revisited

My knowledge of ancient Greek is not significantly different from zero, but every student of economics at some point runs into oikonomia, the root word from which "economy" and "economics" were later derived. For example, in describing the etymology of "economy," the Oxford English Dictionary writes that it derives from "ancient Greek οἰκονομικός practised in the management of a household or family, thrifty, frugal, economical."

To the modern ear, the idea of economics as having roots in "household management" makes some intuitive sense: after all, a number of modern economic models are built on the idea of a household that seeks to maximize its utility, subject to constraints of income and/or time. However, Dotan Leshem suggests that this easy parallel between what the Greeks meant by household management and modern microeconomics is rather misleading. He provides some additional context and insight for the term in "Oikonomia Redefined," which appeared in the Journal of the History of Economic Thought, March 2013 (35:1, pp. 43-61). The journal is not freely available online, but many readers will have access through a library subscription. Leshem describes a work written by Xenophon, roughly dated to about 360 BC:

The first to propose a definition of oikonomia—the management and dispensation of a household—was Xenophon. He did so in the concluding chapter of the theoretical dialogue of the Oikonomikos (the Oikonomikos is composed of two dialogues: the first is theoretical, while the second focuses on the art of oikonomia). … Xenophon’s definition is composed of four building blocks, or sub-definitions: i) oikonomia as a branch of theoretical knowledge; ii) the oikos as the totality of one’s property; iii) property as that which is useful for life; and iv) oikonomia as the knowledge by which men increase that which is useful for life. Clarifying the meanings of the sub-definitions by a close reading of ancient Greek texts will allow me to argue that the ancient Greek philosophers understood oikonomia as encompassing any activity in which man, when faced with nature’s abundance or excess, acquires a prudent disposition that is translated into practical and theoretical knowledge, in order to comply with his needs and generate surplus. The surplus generated allows man to practice extra economic activities such as politics and philosophy. Excess in the definition proposed is an attribute of nature, which is assumed to be able to meet everyone’s needs and beyond, if economized prudently. Surplus, on the other hand, is the product of people’s prudent economizing of nature’s excess that is not used for securing existence.

To get a grip on this earlier notion, it's useful to remember that the idea of a "household" was an expansive one in the times of ancient Greece: indeed, it included all property of the well-to-do in a way that also encompassed what we would now think of as production and firms. This broad idea of the household was called the oikos, which was then viewed as a part of the polis, or public sphere.

The fundamental economic challenge, according to the ancient Greeks, was to strike a balance. On one side, too little emphasis on economics meant an inability to provide the surplus that would enable some people to live the good life of politics and philosophy. On the other side, too great an emphasis on economics could lead to a pursuit of luxurious living, an outcome which would also interfere with pursuit of the good life. Here's how Leshem describes Xenophon on this subject:

In the same vein, Xenophon, who explored the nature of wealth in the first two chapters of the Oikonomikos, is also preoccupied with setting the right limits to engagement in economics without directing surplus either into luxury or back into the economy. He does so by presenting two obstacles to man’s accumulation of wealth. Both are the outcome of self-enslavement to excessive desires instead of need satisfaction. The first takes place when someone is immersed in non-economic activities that prevent him from ‘‘engaging in useful occupations,’’ meaning that he is wholly taken up with activities that prevent him from economizing his life. Such total avoidance of economizing, in its meaning of utilizing usable things, is presented as a sort of bondage. Put differently, evading a prudent disposition causes the loss of the conditions enabling a good and happy life. The second obstacle to wealth accumulation arises when one immerses in the economic sphere, having enslaved oneself to desires:

[Xenophon writes:] "And these too, are slaves, and they are ruled by extremely harsh masters. Some are ruled by gluttony, some by fornication, some by drunkenness, and some by foolish and expensive ambitions which rule cruelly over any men they get into their power, as long as they see that they are in their prime and able to work . . . mistresses such as I have described perpetually attack the bodies and souls and households all the time that they dominate them."

This second kind of self-enslavement is not to be found in avoidance of economic activities and the lack of prudent disposition when using things. Instead, it is to be found in the failure to set boundaries to the economic sphere and, as a consequence, to fully immerse oneself in it. Such full immersion is presented as a lack of ability to generate extra-economic surplus. It is here where the other side of oikonomia’s definition as prudent conduct makes its appearance: besides utilizing the thing acquired for the sake of existence, its prudent use generates extra-economic surplus.

The modern notion of economics is rooted in the idea that we all face scarcity of time, money, and energy, and thus need to make decisions involving tradeoffs. As Leshem points out, the ancient Greek notion is instead rooted in an idea that we face abundance: how to bring that abundance to fruition, and how to prevent ourselves from giving in to luxurious living. Because the purpose of economic life is to create this extra-economic surplus, the goal can be achieved either by increasing production, or by keeping the level of consumption low enough that a comfortable surplus will persist. Leshem writes:

"As can be seen, the problem arising when supplying the needs of the oikos is not how to deal with scarce means. It is, rather, how to set a limit to engaging in economic matters altogether, since nature possesses excessive means that can supply all of people’s natural needs, as well as their unnatural desires. On the other hand, if economized prudently, this excess can be used to generate surplus. It can supply the needs of all the inhabitants of one’s oikos or polis, and free some of its members from engaging in economic matters to experience the good life, which is extra-economic."

Pulling these various elements together, Leshem sums up Xenophon's definition in this way (Greek words and page numbers omitted):

Xenophon’s definition of the oikonomia, as ‘‘a branch of theoretical knowledge . . . by which men can increase household . . . which is useful for life . . .’’  can be reformulated into: oikonomia is the prudent management of the excess found in man and nature in order to allow the practice of a happy life with friends, in politics, and in philosophy. We can see that the two definitions are interchangeable; oikonomia is the management of the oikos. The oikos itself equals wealth, in turn to be defined as everything useful for life. …  The definition of wealth is compatible with the definition of the oikonomia as the prudent management of needs satisfaction in order to generate surplus leisure time (which was perceived by Aristotle as a precondition for the attainment of happiness).

Up to this point, the notion of economics and the "good life" of politics or philosophy seem like a conception limited to the upper sliver of property owners. Indeed, there is an interpretation of Greek economic philosophy often associated with Aristotle, which still has echoes today, which holds that economic life is by definition without virtue, and true human virtue comes only from noneconomic areas of life like public life and philosophy. Taken to an extreme, this view would hold that those who put too much time and energy into work cannot be virtuous. An alternative modern view, often associated with John Locke, holds that economic work is a transformative interaction with the natural world through which humans create their own autonomy and virtue. (Here is an essay of my own thoughts on interactions between economics and virtue.)

Leshem argues that while scholars of ancient Greek economic thought used to see a disjunction between economic life and a virtuous life, with the virtues of politics and philosophy largely limited to those in high positions, the more recent literature has taken a broader view. He writes:

"In addition, contemporary literature persuasively presents the oikos as a diversified domain in which there exist all kinds of human relations besides despotic ones. They stress the friendship between husband and wife, it being for the sake of happiness and not just as a means to support the polis, the role of education of children within the household, the different kinds of slaves, the use of other means of communication beside violence, and the household's existence in and for itself. In this depiction, not only the master, but many participants in the household, can demonstrate virtue, doing so within its bounds."

Overall, the concept of economic life among the ancient Greeks was not built on individuals and firms interacting in markets for goods, labor, and capital. Nor was it built on a fundamental issue of facing scarcity and making decisions that involve tradeoffs. Instead, the starting point is the role of the household, or oikos, as a building block for society as a whole, and as a means of enabling people to live a virtuous life outside the economic realm. Here is how Leshem sums up the difference between the economic theory of the ancient Greeks and modern economic theory:

[T]he differences and the resemblance between contemporary and ancient Greek economic theory are rather marked. Both define the economic sphere by the disposition people demonstrate (in the former—of prudence; in the latter—of rationality), which is translated into the theoretical and practical knowledge people demonstrate in economic activity. Moreover, both say that everything that people utilize in order to satisfy their needs/desires and to generate surplus is part of the economic domain, and not (just) material wealth. But, while in ancient economic theory, acquiring this disposition was seen as the expression of an ethical choice, in contemporary theory, the individual’s doing so is taken as a given, so that people’s rational disposition can be inferred from their revealed preferences. 

The two economic theories embody distinct ontologies: while ancient economic theory held that humans face abundance and excessive means in the economic domain, contemporary economists hold that it is only scarce means that are available there. As for the designation of the surplus generated, in the ancient theory it is a surplus of leisure time that allows the master/citizen to participate in politics and engage in philosophy, while in the contemporary theory … it is to be turned back to the economic domain, as a source for growth, or, as pointed out by critics of contemporary consumerism, into luxurious consumption.

The Economics of Water in the American West

Fresh water doesn't get used up in a global sense: that is, the quantity of fresh water on planet Earth doesn't change. But the way in which the world's fresh water is naturally distributed–by evaporation, precipitation, groundwater, lakes, and rivers and streams–doesn't always match where people want that water to be. The man-made systems of water distribution like dams, reservoirs, pipelines, and irrigation systems can alter the natural distribution of water to some extent. But the American West is experiencing a combination of drought that reduces the natural supply of water and rising population that wants more water. Even with drought, population pressures, and environmental demands for fresh water, there is actually plenty of water in the American Southwest–at least, if the incentives are put in place for some changes to be implemented by urban households, farmers, water providers, and legislators.

For an analysis of the issues and options, a useful starting point is a set of three papers published by the Hamilton Project at the Brookings Institution: one by Culp, Glennon, and Libecap; one by Kearney, Harris, Hershbein, Jácome, and Nantz; and one by Ajami, Thompson, and Victor.

Here’s a figure from Culp, Glennon, and Libecap showing the U.S. drought situation, concentrated in the southwestern United States:
These southwestern states have also experienced dramatic population growth in recent decades. Of the regions of the United States, these are the states with the highest population growth and the lowest annual rainfall even in average times. Here’s a figure from Kearney, Harris, Hershbein, Jácome, and Nantz:
Here’s my master plan for how to address the water shortfall, drawing on discussions in the various papers.
1) Reduce the incentives for outdoor watering by urban households in dry states. 
If you had to guess, would you think that urban households in the dry states of the American Southwest use more or less water than households in other states? In general, these households tend to be heavier users of water. Here’s a figure from Kearney et al., who report (citations omitted):

Outdoor watering is the main factor driving the higher use of domestic water per capita in drier states in the West. Whereas residents in wetter states in the East can often rely on rainwater for their landscaping, the inhabitants of Western states must rely on sprinklers. As an example, Utah’s high rate of domestic water use per capita is driven by the fact that its lawns and gardens require more watering due to the state’s dry climate. Similarly, half of California’s residential water is used solely for outdoor purposes; coastal regions in that state use less water per capita than inland regions, largely because of less landscape watering . . .

There are a variety of ways to reduce outdoor use of water: specific rules like banning outside watering, or limiting it to certain times of day (to reduce evaporation); the use of drip irrigation and other water-saving technologies; and so on. For economists, an obvious complement to these sorts of steps is to charge people for water in a way where the first “block” of water that is used has a relatively low price, but then additional “blocks” have higher and higher prices.
Here’s a figure showing average monthly water bills across cities. Los Angeles and San Diego do rank among the cities with higher bills, although the absolute difference is not enormous. But again, the point here is not the average bill, but rather that those who use large amounts of water because they want a green lawn and a washed-down driveway should face some incentives to alter that behavior.
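The increasing-block idea is easy to make concrete. Here’s a minimal sketch of a tiered water bill; the block sizes and per-gallon rates are hypothetical, not any utility’s actual tariff:

```python
def water_bill(gallons,
               blocks=((5_000, 0.003), (10_000, 0.006), (float("inf"), 0.012))):
    """Monthly bill under increasing block rates.

    `blocks` lists (gallons_in_block, price_per_gallon) pairs in order;
    the sizes and rates here are hypothetical, not any utility's tariff.
    """
    bill, remaining = 0.0, gallons
    for size, price in blocks:
        used = min(remaining, size)  # fill this block before moving to the next
        bill += used * price
        remaining -= used
        if remaining <= 0:
            break
    return bill

# A light user pays the low first-block rate on all use (~$12 here),
# while heavy outdoor use gets billed mostly at the top rate (~$195 here).
print(water_bill(4_000))
print(water_bill(25_000))
```

The point of the design is that the marginal price rises with use: the household watering a large lawn faces a much steeper price on its last gallons than on its first.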
2) Upgrade the water delivery infrastructure. 

One hears a lot of talk about the case for additional infrastructure spending, but much of the focus seems to be on fixing roads and bridges. I’d like to hear some additional emphasis on how to fix up the water infrastructure system. As Ajami, Thompson, and Victor note:

“Water infrastructure, by some measures the oldest and most fragile part of the country’s built environment, has decayed. … Water infrastructure—including dams, reservoirs, aqueducts, and urban distribution pipes—is aging: almost 40 percent of the pipes used in the nation’s water distribution systems are forty years old or older, and some key infrastructure is a century old. On average, about 16 percent of the nation’s piped water is lost due to leaks and system inefficiencies, wasting about 7 billion gallons of clean and treated water every day …. Metering inaccuracies and unauthorized consumption also leads to revenue loss. Overall, about 30 percent of the water in the United States falls under the category of nonrevenue water, meaning water that has been extracted, treated, and distributed, but that has never generated any revenue because it has been lost to leaks, metering inaccuracies, or the like …”
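The quoted percentages can be cross-checked with some back-of-the-envelope arithmetic; the calculation below uses only the figures in the quotation (16 percent lost to leaks, about 7 billion gallons per day, 30 percent nonrevenue):

```python
# All inputs below come straight from the quotation above.
leak_share = 0.16          # share of piped water lost to leaks
leak_volume = 7e9          # gallons per day lost to those leaks
nonrevenue_share = 0.30    # share of water that generates no revenue

total_piped = leak_volume / leak_share       # implied total treated, piped water
nonrevenue = nonrevenue_share * total_piped  # implied nonrevenue water

print(f"implied piped water: {total_piped / 1e9:.1f} billion gallons/day")
print(f"implied nonrevenue water: {nonrevenue / 1e9:.1f} billion gallons/day")
# -> roughly 43.8 and 13.1 billion gallons per day, respectively
```

In other words, the leakage figure alone implies that nonrevenue water amounts to well over 10 billion gallons of treated water every day.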

3) Let farmers sell some of their water to urban areas. 
For historical reasons, a very large proportion of the water in many western states, but especially California, goes to agricultural uses. Some of these uses combine relatively high market value and relatively low use of water, like many fruits (including wine grapes), vegetables, and nuts. But the use of water for other crops is more troublesome. Culp, Glennon, and Libecap go into these issues in some detail. As one vivid example, they write: “In 2013, Southern California farmers used more than 100 billion gallons of Colorado River water to grow alfalfa (a very water-intensive crop) that was shipped abroad to support rapidly growing dairy industries, even as the rest of the state struggled through the worst drought in recorded history …”
There are a substantial number of legal barriers to the idea of farmers trading some water to urban areas, but the possibilities are quite striking. Here’s a figure showing that 80% of California’s water use goes to agriculture, with a substantial share of that going to lower-value field crops like alfalfa, rice, and cotton. In agricultural areas, as in urban ones, there is often considerable scope for conserving water in various ways, like targeting the use of irrigation more carefully, making sure that irrigation ditches don’t leak while carrying water, and the like.
Imagine for the sake of argument that it was possible with a comprehensive effort that combined shifting to different crops and water conservation efforts to reduce agricultural water use in California by one-eighth: that is, instead of using 80% of the available water, agriculture would get by with using 70% of the available water. The amount available for urban and/or environmental uses would then rise by half, from 20% to 30% of the available water. 
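That back-of-the-envelope calculation, written out explicitly (the 80 percent agricultural share is the figure cited above):

```python
# California water shares before and after a one-eighth cut in agricultural use.
ag_share = 0.80                        # agriculture's current share of water use
new_ag_share = ag_share * (1 - 1 / 8)  # a one-eighth reduction -> 0.70
urban_share = 1 - ag_share             # urban/environmental uses today: 0.20
new_urban_share = 1 - new_ag_share     # rises to 0.30

# The non-agricultural share rises by half:
increase = (new_urban_share - urban_share) / urban_share
print(round(new_ag_share, 2), round(new_urban_share, 2), round(increase, 2))
# -> 0.7 0.3 0.5
```

A modest proportional cut on the large agricultural side translates into a large proportional gain on the small urban side.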
One approach Culp, Glennon, and Libecap describe, implemented starting in 2002 in Santa Fe, New Mexico, required that any new urban construction had to find a way to offset the water that would be used by that construction.

As an example, developers could obtain a permit to build if they retrofitted existing homes with low-flow toilets. Residents of these homes welcomed the chance to get free toilets, and Santa Fe plumbers jumped at the opportunity for new business. Within a couple of years plumbers had swapped out most of the city’s old toilets with new high-efficiency ones. Water that residents would have flushed away now supplies new homes.  … In short order, a market emerged as developers began to buy water rights from farmers. Developers deposited the water rights in a city-operated water bank; when the development became shovel-ready, the developer withdrew the water rights for the project. If the project stalled, the developer could sell the rights to another developer whose project was farther along. Santa Fe also enacted an aggressive water conservation program and adopted water rates that rise on a per unit basis as households consume additional blocks of water. Thanks to the innovative water-marketing measures, the conservation program, and tiered water rates, water use per person in Santa Fe has dropped 42 percent since 1995 … 
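The mechanics of the Santa Fe water bank can be sketched as a simple ledger. This is an illustrative toy, not the city’s actual system, and the class and method names are invented:

```python
class WaterBank:
    """Toy model of a city-operated water-rights bank (illustrative only)."""

    def __init__(self):
        self.balances = {}  # developer -> acre-feet of banked water rights

    def deposit(self, developer, acre_feet):
        """Bank rights a developer has purchased, e.g. from a farmer."""
        self.balances[developer] = self.balances.get(developer, 0.0) + acre_feet

    def withdraw(self, developer, acre_feet):
        """Withdraw rights when a project becomes shovel-ready."""
        if self.balances.get(developer, 0.0) < acre_feet:
            raise ValueError("insufficient banked rights")
        self.balances[developer] -= acre_feet

    def transfer(self, seller, buyer, acre_feet):
        """If a project stalls, sell banked rights to another developer."""
        self.withdraw(seller, acre_feet)
        self.deposit(buyer, acre_feet)

bank = WaterBank()
bank.deposit("dev_a", 100.0)            # dev_a buys rights from a farmer
bank.transfer("dev_a", "dev_b", 40.0)   # dev_a's project stalls; sells part
bank.withdraw("dev_b", 40.0)            # dev_b's project is shovel-ready
print(bank.balances)                    # {'dev_a': 60.0, 'dev_b': 0.0}
```

The key design feature is that rights sit in the bank until a project is actually ready, so stalled projects don’t strand water that another developer could use.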

4) Set up groundwater banks. 
Historically, most western states have allowed any property owner to drill a well and use groundwater without limit. But the groundwater reserves are slow to recharge, and with the drought and population pressures, they are under severe stress. Culp, Glennon, and Libecap explain (citations omitted):

Groundwater has been the saving grace for many parts of the water-starved West. Following the advent of high-lift turbine pump technology in the 1930s, many regions had access to vast reserves of water in underground aquifers that they have tapped to supply water when surface water supplies were inadequate. A recent study looked at data on freshwater reserves above ground and below ground across the Southwest from 2004 to 2013. It found that freshwater reserves had declined by 53 million acre-feet during this time—a volume equivalent to nearly twice the capacity of Lake Mead! The study also found that 75 percent of the decline came from groundwater sources, rather than from the better-publicized declines in surface reservoirs, such as Lake Mead and Lake Powell. Much of this decline occurred because some Western states, including California, have historically failed to regulate, or do not adequately regulate, groundwater withdrawals. As a result, groundwater aquifers are effectively being mined to provide water for day-to-day use. In response to the ongoing drought, California farmers continue to drill new wells at an alarming rate, lowering water tables to unprecedented depths … In the San Joaquin Valley of California, excessive groundwater pumping caused the water table to plummet and the surface of the earth to subside more than twenty-five feet between 1925 and 1977 …

Arizona has already been taking steps toward groundwater protection, both by limiting what can be taken out and by providing incentives to save water in the form of recharging groundwater (which avoids the problem of evaporation). 

Although not yet developed into a formal exchange, Arizona has been at the cutting edge in developing groundwater recharge and recovery projects and a supporting statutory framework to help enhance the reliability of water supplies. Arizona allows municipal users, industrial users, and various private parties to store water in exchange for credits that they can transfer to other users. Because water stored underground in aquifers is not subject to evaporation, groundwater that is deliberately created through recharge activity can be stored and recovered later. This recharge and recovery approach is facilitated by Arizona laws that restrict the use of groundwater in several of the state’s most important groundwater basins; these restrictions prevent open access to the resource. Restrictions on open access, combined with statutory and regulatory provisions that allow for the creation and recovery of credits, created the essential conditions for trade in stored groundwater. As a result, numerous transactions have occurred between various municipal interests, water providers, and private parties. 

California passed legislation last month to regulate groundwater pumping for the first time. 
5) More research and development on water-saving technologies. 
As Ajami, Thompson, and Victor discuss at some length, there is relatively little innovative activity in water conservation and purification, as opposed to, say, energy conservation and new sources of energy. They argue that part of the reason is that energy-providing companies compete against each other, while most water companies are sleepy publicly-run local monopolies. Potential entrepreneurs have the ability to look at a lot of ways of producing and using energy, confident that if they come up with something useful, their invention will find a ready market. But entrepreneurs looking at various methods of water conservation will often find that their ideas apply only locally, or are hard to patent, or are hard to sell to water companies and users. Here’s their figure comparing spending for energy and water innovation at a global and U.S. level.
Drought is a natural problem. But how the water that is available gets used is an economic problem of the incentives and constraints that determine the allocation of a scarce resource. In the American West, the institutional problems of water allocation seem to me even more severe than the natural problem of drought.
* Full disclosure: I did comments and editing on the paper by Culp, Glennon, and Libecap, and was paid an honorarium for doing so. 

Political Polarization and Confirmation Bias

Election Day is coming up a week from tomorrow, on Tuesday. Are you voting based on your assessment of the beliefs, character, and skills of the candidates? Or are you voting the party line, one more time? Here’s an article I wrote for the Star Tribune newspaper in my home state of Minnesota. I have added weblinks below to several of the studies mentioned.

“It’s my belief and I’m sticking to it:

In such a polarized atmosphere, you may want to examine pre-existing biases.”
By Timothy Taylor

Part of the reason American voters have become more polarized in recent decades is that both sides feel better-informed.

The share of Democrats who had “unfavorable” attitudes about the Republican Party rose from 57 percent in 1994 to 79 percent in 2014, according to a Pew Research Center survey in June called “Political Polarization in the American Public.”

Similarly, the percentage of Republicans who had unfavorable feelings about the Democratic Party climbed from 68 percent to 82 percent.

Most of this increase is due to those who have “very unfavorable” views of the other party. Among Democrats, 16 percent had “very unfavorable” opinions of the Republican Party in 1994, rising to 38 percent by 2014. Among Republicans, 17 percent had a “very unfavorable” view of the Democratic Party in 1994, rising to 43 percent by 2014.

A follow-up poll by Pew in October found that those with more polarized beliefs are more likely to vote. The effort to stir the passions of the ideologically polarized base so that those people turn out to vote explains the tone of many political advertisements.

A common response to this increasing polarization is to call for providing more unbiased facts. But in a phenomenon that psychologists and economists call “confirmation bias,” people tend to interpret additional information as additional support for their pre-existing ideas.

One classic study of confirmation bias was published in the Journal of Personality and Social Psychology in 1979 by three Stanford psychologists, Charles G. Lord, Lee Ross and Mark R. Lepper. In that experiment, 151 college undergraduates were surveyed about their beliefs on capital punishment. Everyone was then exposed to two studies, one favoring and one opposing the death penalty. They were also provided details of how these studies were done, along with critiques and rebuttals for each study.

The result of receiving balanced pro-and-con information was not greater intellectual humility — that is, a deeper perception that your own preferred position might have some weaknesses and the other side might have some strengths. Instead, the result was a greater polarization of beliefs. Student subjects on both sides — who had received the same packet of balanced information! — all tended to believe that the information confirmed their previous position.

A number of studies have documented the reality of confirmation bias since then. In an especially clever 2013 study, Dan M. Kahan (Yale University), Ellen Peters (Ohio State), Erica Cantrell Dawson (Cornell) and Paul Slovic (Oregon) showed that people’s ability to interpret numbers declines when a political context is added.

Their study included 1,100 adults of varying political beliefs, split into four groups. The first two groups received a small table of data about a hypothetical skin cream and whether it worked to reduce rashes. Some got data suggesting that the cream worked; others got data suggesting it didn’t. But people of all political persuasions had little trouble interpreting the data correctly.

The other two groups got tables of data with exactly the same numbers. But instead of indicating whether a skin cream worked, the labels on the table now said the data showed a number of cities that had enacted a ban on handguns, or had not, and whether the result had been lower crime rates, or not.

Some got data suggesting that the handgun ban had reduced crime; others got data suggesting it didn’t. The data tables were identical to the skin cream example. But people in these groups became unable to describe what the tables found. Instead, political liberals and conservatives both tended to find that the data supported their pre-existing beliefs about guns and crime — even when it clearly didn’t.
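The trap in the Kahan et al. design is that reading the table correctly requires comparing proportions rather than raw counts. Here is a sketch with cell counts in the spirit of the study’s skin-cream stimulus (the exact published numbers may differ):

```python
# A 2x2 table of the kind used in the study, with illustrative counts.
# Rows: used the skin cream vs. did not; columns: rash improved vs. worsened.
table = {
    "cream":    {"improved": 223, "worsened": 75},
    "no_cream": {"improved": 107, "worsened": 21},
}

def improvement_rate(row):
    """Share of patients in a row whose rash improved."""
    return row["improved"] / (row["improved"] + row["worsened"])

cream = improvement_rate(table["cream"])
control = improvement_rate(table["no_cream"])
print(f"cream: {cream:.0%}, no cream: {control:.0%}")
# The cream row has far more "improved" patients in raw counts (223 vs. 107),
# which is what misleads readers; but the improvement *rate* is lower with
# the cream (~75% vs. ~84%), so these data actually cut against it.
```

Subjects who simply compared the big numbers in the “improved” column got the answer backwards, and in the politically charged handgun version, that error lined up with prior beliefs.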

In short, many Americans wear information about public policy like medieval armor, using it to ward off challenges.

Of course, it’s always easy to define others as hyperpartisans who won’t even acknowledge basic facts. But what about you? One obvious test is how much your beliefs change depending on the party of a president.

For example, have your opinions on the economic dangers of large budget deficits varied, coincidentally, with whether the deficits in question occurred under President Bush (or Reagan) or under President Obama?

Is your level of outrage about presidents who push the edge of their constitutional powers aimed mostly at presidents of “the other” party? What about your level of discontent over government surveillance of phones and e-mails? Do your feelings about military actions in the Middle East vary by the party of the commander in chief?

Do you blame the current gridlock in Congress almost entirely on the Republican-controlled House of Representatives or almost entirely on the Democratic-controlled Senate? Did you oppose ending the Senate filibuster back in 2006, when Democrats could use it to slow down the Republicans, but then favor ending the filibuster in 2014, when Republicans could use it to slow down Democrats? Or vice versa?

Do big-money political contributions and rich candidates seem unfair when they are on the other side of the political spectrum, but part of a robust political process and a key element of free speech when they support your preferred side?

Do you complain about gridlock and lack of bipartisanship, but then — in the secrecy of the ballot box — do you almost always vote a straight party ticket?

Of course, for all of these issues and many others, there are important distinctions that can be drawn between similar policies at different times and places. But if your personal political compass somehow always rotates to point to how your pre-existing beliefs are already correct, then you might want to remember how confirmation bias tends to shade everyone’s thinking.

When it comes to political beliefs, most people live in a cocoon of semi-manufactured outrage and self-congratulatory confirmation bias. The Pew surveys offer evidence on the political segregation in neighborhoods, places of worship, sources of news — and even in whom we marry.

Being opposed to political polarization doesn’t mean backing off from your beliefs. But it does mean holding those beliefs with a dose of humility. If you can’t acknowledge that there is a sensible challenge to a large number (or most?) of your political views, even though you ultimately do not agree with that challenge, you are ill-informed.

A wise economist I know named Victor Fuchs once wrote: “Politically I am a Radical Moderate. ‘Moderate’ because I believe in the need for balance, both in the goals that we set and in the institutions that we nourish in order to pursue those goals. ‘Radical’ because I believe that this position should be expressed as vigorously and as forcefully as extremists on the Right and Left push theirs.”

But most moderates are not radical. Instead, they are often turned off and tuned out from an increasingly polarized political arena.

Timothy Taylor is managing editor of the Journal of Economic Perspectives, based at Macalester College in St. Paul.

Keeping Up with the Joneses on Energy Conservation

The phrase “Keeping Up with the Joneses” seems to have become firmly established in U.S. culture as a result of a prominent comic strip by that name, which started in 1913 and ran for several decades. Characters in the strip often referred to what the Joneses were doing, but the Joneses themselves never appeared. Usually the term refers to a desire to imitate the higher and more conspicuous consumption levels of one’s neighbors. But can a desire to follow the neighbors also be harnessed to energy conservation efforts?

Hunt Allcott and Todd Rogers offer evidence on that question in “The Short-Run and Long-Run Effects of Behavioral Interventions: Experimental Evidence from Energy Conservation,” in the October 2014 issue of the American Economic Review (104:10, pp. 3003–3037). The AER is not freely available online, but many readers will have access through library subscriptions. (Full disclosure: The AER is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.) They write:

We study a widely-implemented and highly-publicized behavioral intervention, the “home energy report” produced by a company called Opower. The Opower reports feature personalized energy use feedback, social comparisons, and energy conservation information, and they are mailed to households every month or every few months for an indefinite period. Utilities hire Opower to send the reports primarily because the resulting energy savings help to comply with state energy conservation requirements. There are now 6.2 million households receiving home energy reports at 85 utilities across the United States.

At some of the utilities, the Opower notices have been implemented as a randomized control trial, which makes it relatively straightforward to compare the behavior of those who receive the notices and those who don’t. One can also use this data to look at questions like whether the notice might have a short-term effect that fades unless the reminders continue, or a longer-term effect that continues for a time and then fades as people get tired of receiving the notices. Here’s an example of the front and back of a typical Opower notice:

What effect do these notices have? Allcott and Rogers report that when first receiving a notice, a number of consumers show a quick but short-term reduction in energy use. As people receive more notices, this cycle of reducing consumption and then bouncing back gets smaller. But the repetition of the message seems to have a longer-term effect after two years–that is, people’s habits have changed in a way that lasts for several more years. In their words:

At first, there is a pattern of “action and backsliding”: consumers reduce electricity use markedly within days of receiving each of their initial reports, but these immediate efforts decay at a rate that might cause the effects to disappear after a few months if the treatment were not repeated. Over time, however, the cyclical pattern of action and backsliding attenuates. After the first four reports, the immediate consumption decreases after report arrivals are about five times smaller than they were initially. For the groups whose reports are discontinued after about two years, the effects decay at about 10 to 20 percent per year—four to eight times slower than the decay rate between the initial reports. This difference implies that as the intervention is repeated, people gradually develop a new “capital stock” that generates persistent changes in outcomes. This capital stock might be physical capital, such as energy efficient lightbulbs or appliances, or “consumption capital”—a stock of energy use habits … Strikingly, however, even though the effects are relatively persistent and the “action and backsliding” has attenuated, consumers do not habituate fully even after two years: treatment effects in the third through fifth years are 50 to 60 percent stronger if the intervention is continued instead of discontinued.
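To see what those decay rates imply, here is a toy projection of a treatment effect that decays geometrically each year. The parameters are loosely taken from the quoted ranges, purely for illustration; this is not the paper’s estimation procedure:

```python
def effect_path(initial_effect, annual_decay, years):
    """Project a treatment effect that decays geometrically each year."""
    effect, path = initial_effect, []
    for _ in range(years):
        path.append(effect)
        effect *= 1 - annual_decay
    return path

# Post-discontinuation decay of ~15%/year (midpoint of the quoted 10-20%
# range) versus a much faster decay, like that between the initial reports.
slow = effect_path(initial_effect=2.0, annual_decay=0.15, years=5)
fast = effect_path(initial_effect=2.0, annual_decay=0.60, years=5)
print([round(x, 2) for x in slow])  # over half the effect survives five years
print([round(x, 2) for x in fast])  # the effect mostly vanishes within a few years
```

The contrast between the two paths is the paper’s “capital stock” story in miniature: once habits or equipment have changed, the conservation effect erodes only slowly even after the reports stop.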

One reason for having utilities encourage energy conservation is that it can be cheaper than building additional electricity generation plants, and it avoids the broader social and environmental costs of such plants. Electricity in the U.S. across all sectors of the economy now costs about 11 cents per kilowatt-hour. Thus, Allcott and Rogers look at the cost-effectiveness of sending Opower reports, defined “as the cost to produce and mail reports divided by the kilowatt-hours of electricity conserved.”

They find that a one-shot Opower notice has a cost-effectiveness of 4.31 cents/kWh. Extending the intervention to two years costs more in sending out additional notices. But the continual reminders keep encouraging people to conserve, and over that time people also build up a pattern of reduced energy consumption. In their words, “The two-year intervention is much more cost effective than the one-shot intervention, both because people have not habituated after the first report and because the capital stock formation process takes time.” Thus, the overall cost-effectiveness after about two years is about 1.5 cents/kWh.
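The cost-effectiveness measure itself is simple division. In the sketch below, the per-report cost and the kWh totals are invented for illustration and chosen so the ratios land near the paper’s figures; only the 4.31 and 1.5 cents/kWh ratios and the roughly 11 cents/kWh retail price come from the text:

```python
def cost_effectiveness(program_cost_dollars, kwh_conserved):
    """Cents of program cost per kilowatt-hour of electricity conserved."""
    return 100 * program_cost_dollars / kwh_conserved

# Hypothetical program: 10,000 households, $1 to produce and mail each report.
one_shot_cost = 10_000 * 1.00        # one report per household
two_year_cost = 10_000 * 1.00 * 24   # monthly reports for two years

# Invented conservation totals (kWh), chosen to match the reported ratios.
one_shot_kwh = 232_000
two_year_kwh = 16_000_000

print(f"one-shot: {cost_effectiveness(one_shot_cost, one_shot_kwh):.2f} cents/kWh")
print(f"two-year: {cost_effectiveness(two_year_cost, two_year_kwh):.2f} cents/kWh")
# Both figures sit well below the ~11 cents/kWh retail price of electricity,
# which is why paying for conservation can beat building new generation.
```

The two-year program costs 24 times as much to run, but in this stylized example it conserves enough additional electricity that each saved kilowatt-hour comes far cheaper.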

After two years, it seems possible to slack off on the reminders, and to send them perhaps twice a year rather than with every monthly bill. This step reduces the cost of sending the reminders, but seems to keep about the same level of effectiveness in terms of holding down energy consumption. Indeed, households don’t seem to get used to the reports, but seem to keep responding to the notices even after a third and fourth year. As they write: “However, it is remarkable how little cost effectiveness decreases after two years, suggesting strikingly little habituation.”

I’m not surprised that these kinds of notices about how much electricity the Joneses are using have a short-run effect. But the surprise in these results is that people keep chasing the Joneses toward greater electricity conservation even after several years of receiving these notices. It makes one wonder if there aren’t some other ways in which information and social pressure, discreetly and privately applied, might be used in the service of environmental goals.