Behavioral Economics and Regulation

Back in 2008, Cass R. Sunstein and Richard Thaler wrote Nudge: Improving Decisions About Health, Wealth, and Happiness, a book about how findings from behavioral economics can be applied to shaping behavior. Since President Obama appointed Sunstein as Administrator of the Office of Information and Regulatory Affairs in the Office of Management and Budget, there has been considerable interest in seeing how he might put this approach into effect. In the Fall 2011 issue of the University of Chicago Law Review, Sunstein has written "Empirically Informed Regulation," which discusses his approach and a selection of the policy results.

Sunstein starts this way (footnotes omitted): "In recent years, a number of social scientists have been incorporating empirical findings about human behavior into economic models. These findings offer useful insights for thinking about regulation and its likely consequences. They also offer some suggestions about the appropriate design of effective, low-cost, choice-preserving approaches to regulatory problems, including disclosure requirements, default rules, and simplification. A general lesson is that small, inexpensive policy initiatives can have large and highly beneficial effects." Here are a few examples of the issues and possibilities that he raises for such an approach:

  • "In the domain of retirement savings, for example, the default rule has significant consequences. When people are asked whether they want to opt in to a retirement plan, the level of participation is far lower than if they are asked whether they want to opt out. Automatic enrollment significantly increases participation."
  • "For example, those who are informed of the benefits of a vaccine are more likely to become vaccinated if they are also given specific plans and maps describing where to go. Similarly, behavior has been shown to be significantly affected if people are informed, not abstractly of the value of “healthy eating,” but specifically of the advantages of buying 1 percent milk as opposed to whole milk."
  • "When patients are told that 90 percent of those who have a certain operation are alive after five years, they are more likely to elect to have the operation than when they are told that after five years, 10 percent of patients are dead. It follows that a product that is labeled “90 percent fat-free” may well be more appealing than one that is labeled “10 percent fat.”"
  • "In some contexts, social norms can help create a phenomenon of compliance without enforcement—as, for example, when people comply with laws forbidding indoor smoking or requiring the buckling of seat belts, in part because of social norms or the expressive function of those laws."
  • "Many people believe that they are less likely than others to suffer from various misfortunes, including automobile accidents and adverse health outcomes. One study found that while smokers do not underestimate the statistical risks faced by the population of smokers, they nonetheless believe that their personal risk is less than that of the average nonsmoker."

The wave of behavioral economics research seems to me one of the most intriguing and fruitful developments in economics in the last few decades. However, in thinking about its value as a method of improving regulation, I often find myself feeling skeptical. Although there is much to praise in Sunstein's essay and approach to regulation, let me focus here on raising four skeptical questions.

1) How big a deal is this combination of behavioral economics and regulation?

The work on how people's savings patterns are affected by whether they face a default rule seems to me the shining success of behavioral economics as applied to policy. It addresses an issue of first-order importance that cuts across macroeconomics, microeconomics, and social policy: Why do so many people save so little?

However, a number of the other applications seem to me relatively small potatoes. For example, at one point Sunstein lists nine examples of regulations that have been simplified or eliminated. If you add together his estimated cost savings for all nine rules, it's about $1 billion per year. I'm in favor of saving that $1 billion each year! But in the context of federal regulation and the U.S. economy, it's not a large amount.

2) Does behavioral economics imply more regulation, or just offer suggestions for better regulation?

Sunstein clearly takes the second position: "An understanding of the findings outlined above does not, by itself, demonstrate that “more” regulation would be desirable. … It would be absurd to say that empirically informed regulation is more aggressive than regulation that is not so informed, or that an understanding of recent empirical findings calls for more regulation rather than less. The argument is instead that such an understanding can help to inform the design of regulatory programs."

3) How well can the government apply these lessons?

There are reasons to doubt how well government can apply these insights as it goes about its regulatory tasks. As Sunstein writes: "It should not be necessary to acknowledge that public officials are subject to error as well. Indeed, errors may result from one or more of the findings traced above; officials are human and may also err. The dynamics of the political process may or may not lead in the right direction."

Consider for a moment a seemingly simple policy, like improved disclosure requirements. What rule should be followed? Here's how Sunstein phrases it: "Disclosure requirements should be designed for homo sapiens, not homo economicus (the agent in economics textbooks). In addition, emphasis on certain variables may attract undue attention and prove to be misleading. If disclosure requirements are to be helpful, they must be designed to be sensitive to how people actually process information. A good rule of thumb is that disclosure should be concrete, straightforward, simple, meaningful, timely, and salient."

Just how to apply this perspective in the case of, say, the USDA food pyramid, health warnings on cigarette packages, or public information on toxic chemical releases is not going to be straightforward. It made me smile that at the back of Sunstein's paper, there is an appendix about "open and transparent government" that takes 12 pages of bureaucratese to explain what the term means. Disclosure requirements and other regulations are going to be the subject of intense lobbying, and there will be pressure from many parties to make people feel as if their politicians are being public-spirited and responsive, while continuing to conceal relevant costs and tradeoffs.

4) Is overcoming these issues unambiguously beneficial?

An often-unspoken assumption in this literature is that people are always better off if they have better information, or better disclosure rules, or a more accurate perception of risk. This isn't necessarily so. For example, a recent working paper by Jacob Goldin at Princeton's Industrial Relations Center tackles the issue of "Optimal Tax Salience." The paper is technical, but under the math is a basic intuition: if people are unaware that their marginal tax rate is rising, then they will not cut back as much on work effort. In that narrow sense, the costs of higher tax rates would be reduced. Goldin makes a case that having a mixture of taxes that are more and less salient may actually end up being better for society.

There\’s no reason government shouldn\’t be able to learn from the management and marketing literature about how to affect people\’s behavior. If government is going to impose a regulation, it should be designed to work better rather than worse. And yet, the point of departure of behavioral economics is that people don\’t always know very clearly what they want. People are affected by how questions are framed, by what information they have, by default rules, by how risks are perceived, by whether costs and benefits are immediate or long-term, and by social norms. Most of us recognize that private sector actors try to manipulate our decisions through these factors, and we are rightly skeptical that  they are doing so in our own best self-interest.

Thus, I find that I tend to be more comfortable with clear-cut government actions, like readily apparent taxes and subsidies, regulations that set certain standards or forbid certain activities, or default rules where the possibility of opting out is clearly stated. There is some virtue in having government be clunky and apparent in its actions; conversely, a government that views its task as being more subtle and manipulative, affecting choices in ways that people can't easily perceive, seems to me a potential cause for concern. For example, I'm more comfortable with a tax on gasoline or on carbon than I am with government attempting to discourage fossil fuel use by providing the public with what some government agency has decided is the relevant, meaningful, timely, and salient information.

The Price of Nails

I just ran across a delightful working paper by Daniel Sichel of the Federal Reserve that was presented at several seminars last summer: "Everyday Products Weren't Always that Way: Prices of Nails and Screws since about 1700." Here's a version presented in July 2011; here's a version from an October presentation. I'll focus here on the price of nails. Sichel writes:

"Using the preferred price index developed in this paper, the real price of nails on a quality adjusted basis fell—relative to a broad bundle of consumption goods as measured by the overall CPI—by a factor of about 15 from its peak in the mid-1700s to the middle of the 20th century, averaging a decline of 1.3 percent a year. (Prices have risen some in the past several decades.) …

"[T]oday, a nail-making machine with a footprint of about three feet square [can] produce 300 to 450 nails per minute. If we assume that one worker can operate 4 machines at once and that each machine produces 350 nails a minute, then labor productivity of nail production has increased by a factor of 1400 since the era of hand-forged nails when it took a worker about a minute to produce a nail. With most of this change occurring over the period from 1790 to 1940, the annual rate of increase in labor productivity was nearly 5 percent a year …"
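Both back-of-the-envelope numbers in Sichel's quotes are easy to check. A quick sketch (the 200-year span for the price decline is my rough reading of "mid-1700s to the middle of the 20th century," not a figure from the paper; the productivity inputs are all taken from the quote above):

```python
# 1) A fifteenfold fall in the real, quality-adjusted price of nails
#    over an assumed span of roughly 200 years (mid-1700s peak to ~1950):
price_factor = 15
price_years = 200  # assumption, not Sichel's figure
annual_decline = 1 - (1 / price_factor) ** (1 / price_years)
print(round(annual_decline * 100, 2))  # 1.34, consistent with "1.3 percent a year"

# 2) Labor productivity: 4 machines x 350 nails/minute per worker,
#    versus roughly one hand-forged nail per worker-minute.
productivity_factor = (4 * 350) / 1
print(productivity_factor)  # 1400.0

# Annualized over 1790-1940, when most of the change occurred:
prod_years = 1940 - 1790
annual_growth = productivity_factor ** (1 / prod_years) - 1
print(round(annual_growth * 100, 2))  # 4.95, i.e. "nearly 5 percent a year"
```

Both quoted rates fall out of simple compound-growth arithmetic.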

Here\’s an illustrative figure. The colors show the primary changes in nail technology over time, from hand-forged nails, to a mixture of forged and cut nails, to the predominance of cut nails, to the modern wire nails. (In interpreting the graph, notice that the real price on the vertical axis is a log scale for cents/nail.)

This dramatic change in productivity of nail production has the implication that nails were far more expensive in relative terms back in the 1700s. Sichel offers a number of vivid anecdotes and statistics to support this claim. For example:

  • "[T]he dome of the Maryland State Capitol, completed in 1788 and made largely of wood, was joined together with no nails but rather with wooden pegs and iron straps. Presumably, this choice was made, at least in part, because of the high cost and limited availability of nails at the time."
  • "The high value of nails during the 1700s is highlighted by the practice of burning down abandoned buildings to facilitate recovery of the nails …"
  • "[T]his paper also reports domestic absorption of nails, going back to 1810. At that time—more than 20 years after the Maryland State Capitol was completed—nails are estimated to have amounted to about 0.4 percent of nominal GNP. In today’s terms, this share is similar to that of household purchases of personal computers and peripherals or of airfares. As prices plunged during the 1800s, domestic absorption rose dramatically. But, as a share of nominal GDP, domestic absorption of nails, which once were quite important, has become de minimis. So, while nails appear everyday today, that perception reflects a couple hundred years of significant declines in their relative price."
  • "In 1798, a relatively simple house (24’ x 36’ with 7 windows) in Warren, Connecticut was valued at $50. … This house likely was built with few nails, but, as a thought experiment, let’s suppose that it were built primarily with nails rather than other joinery. … Suppose that the 1798 house would have required 50 pounds of nails … Given nail prices in 1789 of $12.00 per hundred pounds, the nails for that 1798 house would have cost $6.00, more than 10 percent of the value of the house!"
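The thought experiment in that last bullet is simple to reproduce from the quoted figures:

```python
# Nails for the hypothetical 1798 house in Warren, Connecticut,
# using the prices and quantities quoted by Sichel.
price_per_hundred_pounds = 12.00  # 1789 nail price, dollars per hundred pounds
pounds_of_nails = 50              # supposed requirement for the house
house_value = 50.00               # 1798 valuation of the house

nail_cost = price_per_hundred_pounds * (pounds_of_nails / 100)
share_of_value = nail_cost / house_value
print(nail_cost)                    # 6.0 dollars
print(round(share_of_value * 100))  # 12, i.e. "more than 10 percent of the value"
```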

While the price of nails themselves hasn't fallen in the last half-century, Sichel makes several interesting points. First, the variety of nails has risen and there have been a number of quality improvements, like rust-proofing nails and adding rings around the shank of the nail to improve holding power. In addition, Sichel argues that the invention of the nail-gun has caused the price of an installed nail to continue falling substantially. He uses back-of-the-envelope calculations to suggest that the price of an installed nail using a nail-gun is about 60% lower than the price of a hand-hammered nail, which suggests that the price of an installed nail is near its all-time low right now.

Maintaining Serendipity: Entire Issues of JEP for your E-reader

Back in 2010, the American Economic Association decided to make the articles in my own Journal of Economic Perspectives freely available, ungated and without a password. At present, the issues from the most recent Winter 2012 back to Winter 1994 are freely available on-line.

A new feature is now available. Until now, what has been available is a list of individual articles. However, you can now freely download the entire Winter 2012 issue of JEP in various formats: PDF, Kindle, and ePub. Moreover, if you want entire issues of JEP automatically delivered to your Kindle, it is now possible to subscribe through Amazon. (Amazon doesn't provide free distribution service, but the AEA has negotiated to keep the price as low as possible.)

I was strongly in favor of making the JEP freely available, so that the articles could be widely disseminated and easily linked. However, I have had some concern that if a journal becomes just a collection of downloadable articles, readers might focus only on the specific article they want, and be less likely to sample the other articles within an issue. But the technology for e-readers is moving faster than my fears. I suspect that in the not-too-distant future, most of us will receive most of our magazines and journals sent straight to the e-reader of our choice, fully formatted, and with graphics and ads included. The potential for serendipity (finding that intriguing article for which you didn't know you were looking) will be maintained.

International Trade Within Regions

One of the great strengths of the U.S. economy has long been its enormous internal market. In addition, in the last few decades the U.S. has extended this open regional trading area to embrace Canada and Mexico. Some other regions of the world, like Europe and Asia, also have a high degree of intra-regional trade. However, in Africa, Latin America, and the Middle East, trade within the region is quite limited.

The table below shows merchandise trade by region: the rows show merchandise exports from countries within a region, while the columns show merchandise imports to countries in each region. Thus, the diagonal cells show exports from and to the same region. (Thanks to Danlu Hu for putting together this table from World Trade Organization data available here in Table 1.4.)

For example, look across the "Europe" row, showing the destinations by region of merchandise trade leaving Europe. Well over half of exports leaving countries of Europe end up being imported by other countries of Europe. Or look across the "Asia" row, where slightly more than half of all merchandise exports from Asian countries end up as imports in other Asian countries. In the North American region, a little less than half of the merchandise exports of the region end up being imported by other countries in the region, but of course, one reason this figure is lower than in Europe is the existence of the huge internal U.S. market. Trade from Germany to France shows up in these statistics; trade between California and Texas does not.

But now look at the other regions. In Africa, for example, the absolute level of trade is quite low by comparison with other regions. But what jumps out from these statistics is that only about one-eighth of the merchandise exports from Africa end up in other African nations. Similarly, in the Middle East, only about one-eighth of the merchandise exports from countries in the region end up as imports to other countries in the region. In South and Central America, only about one-third of the merchandise exports from countries in the region end up as imports to other countries in the region.

The modern theory of international trade emphasizes the benefits that trade brings in terms of economies of scale of production, greater variety, and greater competitive pressures for raising productivity. Regions with such low levels of intra-regional trade are missing these benefits, as a number of disparate commenters have noticed.

In the case of Latin America, for example, the Economist magazine had a March 10 leader on trade in Latin America called "Unity is strength: Regional integration, not protectionism, is the right response to fears of deindustrialisation."

"Brazil should be leading a new push to tear down barriers within Latin America as a whole. Consider its agreement with Mexico. The car industry in both countries has benefited because, by offering a larger market and more economies of scale, it has encouraged specialisation. That, in a nutshell, is the case for regional economic integration. Yet, despite a torrent of rhetoric and a mountain of presidential summits in recent years, integration has languished. Latin American countries export much less to their neighbours than do their counterparts in other continents. Huge distances are partly to blame. But trade is also checked by higher tariffs, hold-ups at customs, a tangled skein of separate trade agreements and poor transport links."

Indeed, a recent paper by José Peres Cajías, Marc Badia-Miró, and Anna Carreras-Marín called "Intraregional trade in South America, 1913-50: Economic linkages before institutional agreements" points out that this is a long-standing issue in the Latin American region, and that a larger share of exports from Latin America ended up as imports to other Latin American countries back in the 1940s than occurs today.

In the case of Africa, I posted last December 15 about Africa's Prospects: Half Full or Half Empty?, and one of the themes in the "half-empty" category is the enormous infrastructure deficits, especially in railroads and electricity, that hold back economic integration across Africa.

In the Middle East, I posted on January 27 about a report on "The economics of the Arab Spring," which among other points emphasized: "With a population of 350 million people that share a common language, culture, and a rich trading civilization, the Arab world doesn't function as one common market. … Few Arab countries consider their neighbors as their natural trading partners. Pan-Arab trade is noticeably insignificant. Despite having tripled between 2000 and 2005, the share in intra-Arab trade in total merchandise trade still hovers around 10 percent. … The share of intra-Arab imports, despite having fluctuated widely, is only marginally higher than that in 1960. … Even this limited trade is geographically clustered, with countries in the Gulf and North Africa trading predominantly within their own sub-regions. … It is ironical that a region that connects Asian merchants with European markets is itself stuck in primary production. Everywhere in the world proximity to coasts tends to be associated with lower transport costs and better access to global markets. The Arab world defies these forces of gravity, however."

Across Latin America, Africa, and the Middle East, arguments about the problems of international trade often focus on concerns about being exploited by high-income economies. Whatever the (dubious) merits of these claims in the modern economy, such arguments about trade with high-income countries don't explain why these regions have so little intra-regional trade. For these regions, it might make sense to put the Doha round of the World Trade Organization talks on the back burner; after all, those talks have now been lingering on since 2001 without a resolution in sight. Instead, they should set aside the bogeyman of trade with high-income countries, and make a real effort to create the legal, regulatory, transportation, communication, and financial infrastructure to make serious gains in intra-regional trade.

The Problem of Low-Wage Jobs

John Schmitt discusses \”Low-wage Lessons\” in a January 2012 paper written for the Center for Economic and Policy Research.

Define \”low-wage jobs\” as those that involve earning two-thirds or less of the median hourly wage: that is, those earning less than about $10/hour. As Schmitt notes: \”If low-wage work were a short-term state that helped connect labor-market entrants or re-entrants to longer-term, well-paid employment, high shares of low-wage work would be less of a social concern. Indeed, if low-wage work facilitated transitions from unemployment to well-paid jobs, countries might want to encourage the creation of a low-wage sector to improve workers’ welfare in the long term.\” On the other side, if low-wage jobs are a near-permanent state of affairs for a substantial group of workers, or if such jobs even send a negative signal to potential future employers that this worker is going to have low productivity, then the prevalence of low-wage jobs may be of real policy concern.

Given the rising levels of inequality in the U.S. economy in recent decades, it's not a big surprise that the share of workers who can be classified as "low-wage" has been rising, from about 22% of the workforce in 1979 to about 28% of the workforce by 2009.

Moreover, the share of U.S. workers who are low-wage is considerably higher than in many other high-income countries. About one-quarter of U.S. workers are low-wage, compared with 20-21% in the UK, Canada and Germany; about 15% in Japan; and 8% in Norway and Italy.

The issue here can be summed up with this question: If someone in the U.S. economy is a law-abiding citizen who works full-time for a period of years, can they earn a level of wages that lets them afford a slice of a middle-class standard of living? If you are earning $10/hour and working 2,000 hours per year, your annual earnings of $20,000 would put you below the poverty line of $22,891 for a single parent with three children.
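The gap is stark when spelled out; the wage, hours, and poverty threshold below are the figures cited above:

```python
# Full-time, full-year earnings at roughly the low-wage cutoff,
# compared with the quoted poverty line for a single parent of three.
hourly_wage = 10.00
hours_per_year = 2000
poverty_line = 22891

annual_earnings = hourly_wage * hours_per_year
print(annual_earnings)                 # 20000.0
print(annual_earnings < poverty_line)  # True: full-time work still falls short
print(poverty_line - annual_earnings)  # 2891.0 dollar shortfall
```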

And the problems of low-wage work aren't limited to low wages. Schmitt writes: "Not only are low-wage workers likely to stay in low-wage jobs from one year to the next, they are also more likely than workers in higher-wage jobs to fall into unemployment or to leave the labor force altogether. … U.S. labor law offers workers remarkably few protections. U.S. workers, for example, have the lowest level of employment security in the OECD and no legal right to paid vacations, paid sick days, or paid parental leave. … [M]ore than half (54 percent) of workers in the bottom wage quintile did not have employer-provided health insurance and more than one-third (37 percent) had no health insurance of any kind, private or public."

It\’s worth noting that labor force participation rates for men aged 16-24 have fallen from 72% in 1990 to 57% in 2010, and for men from 25-54, the labor force participation rate has fallen from 93% in 1990 to 89% in 2010, according to Bureau of Labor Statistic data.  Much of this is due to the low pay available to those with low skill levels.

Schmitt only sketches his policy suggestions here, which include higher rates of unionization, higher minimum wages, employment-protection legislation and other national labor laws, along with higher benefits for the jobless and low-income households. He is less of a fan of the Earned Income Tax Credit, fearing that employers capture much of the benefit of the credit because it allows them to pay lower wages than they otherwise would. For my own part, dramatically higher rates of unionization would fly in the face of a half-century trend in the U.S. (see this post for some details). While I'm comfortable with the minimum wage playing some role in the labor market, jacking it up by 50% or more seems to me unwise. I'm an enthusiastic supporter of the EITC, and a cautious supporter of certain national legislation to improve employment benefits and conditions.

But my purpose here is not to argue policy, but only to point out that the U.S. labor market seems to be producing an outcome where a substantial and growing proportion of full-time employees earn barely enough to creep above the poverty line. If we wish to build a society and an economy on rewarding work, it is a harsh fact of U.S. labor markets that such a reward is currently not apparent for many.

Too Big To Fail: How to End It?

Harvey Rosenblum has written "Choosing the Road to Prosperity: Why We Must End Too Big to Fail—Now," in the 2011 Annual Report of the Federal Reserve Bank of Dallas. He does a good job of explaining the "why," but (perhaps constrained by his position at the Fed) pretty much whiffs on the question of "how."

Rosenblum points out that "too big to fail" is now de facto national policy (footnotes and references to exhibits omitted): "In short, the situation in 2008 removed any doubt that several of the largest U.S. banks were too big to fail. At that time, no agency compiled, let alone published, a list of TBTF institutions. Nor did any bank advertise itself to be TBTF. In fact, TBTF did not exist explicitly, in law or policy—and the term itself disguised the fact that commercial banks holding roughly one-third of the assets in the banking system did essentially fail, surviving only with extraordinary government assistance. Most of the largest financial institutions did not fail in the strictest sense. However, bankruptcies, buyouts and bailouts facilitated by the government nonetheless constitute failure. The U.S. financial institutions that failed outright between 2008 and 2011 numbered more than 400—the most since the 1980s."

Moreover, enabling banks with an overdose of toxic assets to stagger forward makes it hard for monetary policy to then use those banks as part of a mechanism for stimulating lending and the macroeconomy: "Bank capital is an issue of regulatory policy, not monetary policy. But monetary policy cannot be effective when a major portion of the banking system is undercapitalized. The machinery of monetary policy hasn’t worked well in the current recovery. The primary reason: TBTF financial institutions. Many of the biggest banks have sputtered, their balance sheets still clogged with toxic assets accumulated in the boom years."

So far, the policy response to "too big to fail" has taken two forms: promising not to do it again, and higher capital requirements. Neither is likely to put an end to "too big to fail."

The Dodd-Frank legislation promises no more bank bailouts–but why would anyone believe such a vow? 

"Dodd–Frank says explicitly that American taxpayers won’t again ride to the rescue of troubled financial institutions. … Going into the financial crisis, markets assumed there was government backing for Fannie Mae and Freddie Mac bonds despite a lack of explicit guarantees. When push came to shove, Washington rode to the rescue. Similarly, no specific mandate existed for the extraordinary governmental assistance provided to Bear Stearns, AIG, Citigroup and Bank of America in the midst of the financial crisis. Lehman Brothers didn’t get government help, but many of the big institutions exposed to Lehman did. Words on paper only go so far. …

"While decrying TBTF, Dodd–Frank lays out conditions for sidestepping the law’s proscriptions on aiding financial institutions. In the future, the ultimate decision won’t rest with the Fed but with the Treasury secretary and, therefore, the president. The shift puts an increasingly political cast on whether to rescue a systemically important financial institution. … The credibility of Dodd–Frank’s disavowal of TBTF will remain in question until a big financial institution actually fails and the wreckage is quickly removed so the economy doesn’t slow to a halt. Nothing would do more to change the risky behavior of the industry and its creditors. For all its bluster, Dodd–Frank leaves TBTF entrenched."

Higher capital requirements will reduce some of the advantage of giant banks: "Policymakers can make their most immediate impact by requiring banks to hold additional capital, providing added protection against bad loans and investments. … TBTF banks’ sheer size and their presumed guarantee of government help in time of crisis have provided a significant edge—perhaps a percentage point or more—in the cost of raising funds. Making these institutions hold added capital will level the playing field for all banks, large and small."

But higher capital requirements are mainly aimed at reducing the need for bank bailouts, one bank at a time. In a systemic financial crisis, they may well not suffice: "A nightmare scenario of several big banks requiring attention might still overwhelm even the most far-reaching regulatory scheme. In all likelihood, TBTF could again become TMTF—too many to fail, as happened in 2008."

It seems to me that if one is deeply serious about stopping too big to fail, two other policies need to be considered. One possibility would be a policy of "narrow banking," where banks are perhaps limited to commercial banking, investment banking, and wealth management, but are barred from running hedge funds, or being dealers or market makers in financial securities. I posted about one such proposal along these lines a couple of weeks ago, in "What Should Banks Be Allowed To Do?" Such "narrow" banks would take on less risk, and would also be limited to operating in a smaller part of the overall market for financial services.

The other possibility is to put a limit on how big banks can grow. Rosenblum writes in a footnote: "Evidence of economies of scale (that is, reduced average costs associated with increased size) in banking suggests that there are, at best, limited cost reductions beyond the $100 billion asset size threshold. Cost reductions beyond this size cutoff may be more attributable to TBTF subsidies enjoyed by the largest banks, especially after the government interventions and bailouts of 2008 and 2009."

Rather than letting TBTF provide an implicit subsidy for greater size, the government could require that when a bank grows to, say, $200 billion in assets, it needs to develop a plan for splitting itself in two, and when a bank approaches $300 billion in assets, it needs to put that plan into effect. For banks, an advantage of such a proposal is that their activities would not need to be restricted, because the failure of even a few such banks wouldn't be catastrophic. For perspective, JPMorgan Chase and Bank of America both had more than $2 trillion in assets in 2011, while Citigroup and Wells Fargo both had well over $1 trillion in assets. Too big to fail, indeed.

Japan Has a Trade Deficit?!?

Japan ran a merchandise trade deficit in 2011! I missed the news when it was announced in January, and could hardly believe it. Japan running large trade surpluses has been one of the few constants of the last several decades, through good times and bad. Here's an illustrative figure from the Daily Yomiuri:

Of course, this event comes with some “buts” attached. The trade deficit largely arose from a onetime event: Japan’s horrendous earthquake and tsunami in March 2011, which led industrial production and exports to fall while imports of natural gas rose. In addition, this trade deficit only involves merchandise trade: if one looks at the overall current account balance, which also includes income from foreign investments, Japan still shows a surplus.

But even if Japan returns to merchandise surpluses in 2012 or 2013, the days of perpetual surpluses in Japan seem numbered. If one squints just a bit at the figure above, one can imagine an overall downward trend in those trade surpluses since the late 1990s. At a fundamental level, a trade surplus means that an economy is producing more than it is consuming–and exporting the rest. But Japan is a rapidly aging society with a low birthrate where the size of the workforce topped out in 1998 and has been shrinking since then. Japan’s government forecasts that the total population of the country will decline by one-quarter in the next 40 years, while the share of Japan’s population over age 65 will rise
from about 23% now to almost 40% by 2050. With this demographic outlook, it seems likely that Japan will become a country that will start to live off some of its vast accumulated savings, consuming more than it produces and running trade deficits, in the not-too-distant future. 

FX Markets: $4.7 Trillion Per Day?

For the uninitiated, when economists refer to FX, the abbreviation doesn’t mean “special effects” or “Federal Express” or “Fighter, Experimental.” It means “foreign exchange.” Morten Bech discussed “FX volume during the financial crisis and now” in the March 2012 issue of the BIS Quarterly Review. BIS stands for the Bank for International Settlements, an organization whose members are central banks and some international organizations, and which, among other tasks, holds conferences, collects data, and facilitates some financial transactions.

In particular, BIS produces the Triennial Central Bank Survey of Foreign Exchange and Derivatives Market Activity, which comes out every three years. The most recent version found that FX trading activity averaged $4.0 trillion a day in April 2010. Yes, that’s not million or billion per day, or trillion per year, but $4 trillion per day. I’ll say a bit more about the implications of that remarkable total in a moment, but Bech’s main task is to look at the underlying data for the three-year survey and thus find a way of estimating the FX market at semiannual or even monthly intervals. Bech writes:

“By applying a technique known as benchmarking to the different sources on FX activity, I produce a monthly time series that is comparable to the headline numbers from the Triennial going back to 2004. Taking stock of FX activity during the financial crisis and now I estimate that in October 2011 daily average turnover was roughly $4.7 trillion based on the latest round of FX committee surveys. Moreover, I find that FX activity may have reached $5 trillion per day prior to that month but is likely to have fallen considerably into early 2012. Furthermore, I show that FX activity continued to grow during the first year of the financial crisis that erupted in mid-2007, reaching a peak of just below $4.5 trillion a day in September 2008. However, in the aftermath of the Lehman Brothers bankruptcy, activity fell substantially, to almost as low as $3 trillion a day in April 2009, and it did not return to its previous peak until the beginning of 2011. Thus, the drop coincided with the precipitous fall worldwide in financial and economic activity in late 2008 and early 2009.”

Here’s a result of Bech’s work. The horizontal axis is years; the vertical axis is the size of the FX market. The green line plots the results for 2004, 2007, and 2010 from the Triennial Survey. The red line uses the benchmarking technique to create a semiannual data series, and the blue line builds on that to create a monthly data series. The vertical lines refer to August 9, 2007, when the first really bad news about the financial crisis hit world financial markets, and September 15, 2008, the middle of what was arguably the worst month of the crisis.

As Bech points out: “The FX market is one of the most important financial markets in the world. It facilitates trade, investments and risk-sharing across borders.” In that spirit, his results interest me in several ways.

First, I’m always on the lookout for ways to illustrate the effects of a global financial crisis that don’t involve trying to explain interest rate spreads to students. Seeing the size of the foreign exchange market contract by one-quarter or so in late 2008 and early 2009 is a useful illustration. There are four more graphs for illustrating the financial crisis in this blog post of last August 3, and two more in this post of May 17.

Second, it’s useful to compare the size of foreign exchange markets at $4.7 trillion per day to the size of world trade. World exports were about $15 trillion for the year of 2010, according to the World Trade Organization. Thus, only a tiny part of the foreign exchange market is involved in financing imports and exports. Instead, by far the most important part of the foreign exchange markets involves international financial investing. This insight helps to explain why FX markets are so notoriously volatile: they are a financial market where international capital is continually rushing in and out of currencies. The volatility suggests that those who are involved in international trade might often do well to lock in future values for foreign exchange in futures and derivatives markets–and of course, part of what makes the FX market so big is the efforts by all parties to hedge themselves against large movements in exchange rates.
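
To make the comparison concrete, here is a back-of-the-envelope calculation; the figure of roughly 250 trading days per year is my own assumption, not a number from Bech's article.

```python
# Rough comparison of FX turnover to the currency needs of world trade.
fx_daily = 4.7e12              # estimated FX turnover per day, October 2011
world_exports_annual = 15e12   # WTO figure for world exports in 2010
trading_days = 250             # assumed trading days per year (an approximation)

trade_per_day = world_exports_annual / trading_days
share = trade_per_day / fx_daily
print(f"Trade-related FX need: about ${trade_per_day / 1e9:.0f} billion per day")
print(f"Share of daily FX turnover: {share:.1%}")
```

Even on generous assumptions, financing actual trade accounts for on the order of 1 percent of daily turnover; essentially all the rest is financial activity.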

Over time, there does appear to be a tendency for foreign exchange rates to move in the general direction that reflects their purchasing power–the so-called “purchasing power parity” exchange rate. In the Fall 2004 issue of my own Journal of Economic Perspectives, Alan M. Taylor and Mark P. Taylor review “The Purchasing Power Parity Debate,” and find that such movements do occur over the long run, but they proceed slowly, over a period of several years, and in the meantime exchange rates are buffeted by changing investor sentiments and current events.

Dissecting U.S. Inequality in International Perspective

We all know that the United States has the highest level of income inequality of any high-income country. Right? But at least according to OECD statistics, this claim is only true if one looks at inequality after taxes and transfers. If one looks at inequality before taxes and transfers, the U.S. economy has less inequality than Germany, Italy, and the United Kingdom, and about the same amount of inequality as France. The OECD data also offers a hint as to why this unexpected (to me, at least) outcome occurs.

Start with the OECD numbers. The OECD uses the Gini coefficient to measure income inequality across high-income countries. For an earlier post with an intuitive explanation and definition of the Gini coefficient, see here. For present purposes, it suffices to say that a Gini coefficient is a way of measuring inequality that theoretically can range from a score of zero for perfect equality, where everyone has exactly the same income, to a score of one for a situation of complete inequality, where one person receives all the income.
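
As a minimal sketch of that definition (not the OECD's own computation, which works from household survey data and equivalence scales), the Gini coefficient equals the mean absolute difference between all pairs of incomes, divided by twice the mean income. The function name and the sample incomes below are invented for illustration.

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference formula:
    G = sum over all ordered pairs of |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([50, 50, 50, 50]))   # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))     # one person gets everything -> 0.75
```

Note that with a finite sample, the "one person gets everything" case yields (n - 1)/n rather than exactly 1; the score approaches 1 as the number of people grows.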

Here’s a compilation of Gini coefficients from OECD data with the United States in the top row, followed by Canada, France, Germany, Italy, Japan, Sweden, and the United Kingdom. The OECD data for the second column is here, and data for the other columns is available by toggling the “Income and population measures” box at the top. All data is for the latest year available. (Thanks to Danlu Hu for putting together the table.)

As noted above, the U.S. has the highest Gini coefficient of these eight comparison countries if measured after taxes and transfers (second column), but not if measured before taxes and transfers (first column). However, a hint as to why this arises can be found in the last four columns, which break down the Gini coefficients by the working age population and the over-65 population.

When it comes to the working age population, before taxes and transfers, the U.S. level of inequality is third-highest, though virtually tied for first with the United Kingdom and Italy. After taxes and transfers, the U.S. level of inequality among the working age population is clearly the highest.

When it comes to the over-65 population, before taxes and transfers, the U.S. has a far more equal distribution of income than France, Germany, and Italy. I haven’t dug down into the data here, but I suspect that these numbers reflect the fact that a much larger share of people over 65 are still in the labor force in the U.S. economy–which makes the distribution more equal before taxes and transfers.

Taxes and transfers make the over-65 distribution of income far more equal in all eight countries, but the U.S. stands out as by far the least equal, followed by Japan, with both well behind the other six countries.

These patterns are consistent with a finding from an OECD report published last fall called Divided We Stand: Why Inequality Keeps Rising. I blogged about it on December 16, 2011, in “Government Redistribution: International Comparisons.” One theme of the report is that the extent of government redistribution across populations is driven much more by the widespread provision of government benefits than by the progressivity of taxation. As the OECD report stated: “Benefits had a much stronger impact on inequality than the other main instruments of cash distribution — social contributions or taxes. … The most important benefit-related determining factor in overall distribution, however, was not benefit levels but the number of people entitled to transfers.”

The Evolving World Production Function

Robert Allen starts off his article on “Technology and the great divergence: Global economic development since 1820” by asking a classic question: Why have low-income countries been seemingly so slow to adopt the technologies for increased production that exist in high-income countries? The article appears in the January 2012 issue of Explorations in Economic History. At least for now, Elsevier is allowing the article to be freely available here, but many academics will also have access through their libraries.

Some of the possible answers are that cultural factors, perhaps like Weber’s “Protestant work ethic,” cause some countries rather than others to adopt new technology. Or perhaps institutional factors like a legacy of property rights and representative government make some countries likelier to develop technology. Allen argues a different view: “This paper explores an alternative explanation of economic development based on the character of technological change itself. While the standard view assumes that technological progress benefits all countries, this paper contends that much technological progress has been biased towards raising labor productivity by increasing capital intensity. The new technology is only worth inventing and using in high wage economies. At the same time, the new technology ultimately leads to even higher wages. The upshot is an ascending spiral of progress in rich countries, but a spiral that it is not profitable for poor countries to follow because their wages are low.”

Simple examples of this phenomenon abound. It is cost effective to install price scanners in U.S. supermarkets, because it saves the time of cashiers, as well as purchasing and accounting workers behind the scenes. But for a low-income country with much lower wages, saving the time of workers isn\’t worth such an investment. Multiply this example all across the economy.
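
The logic of the scanner example can be written down in a few lines; all of the numbers below are hypothetical, chosen only to show how the identical technology passes the cost-benefit test in a high-wage economy and fails it in a low-wage one.

```python
def adopt(hourly_wage, hours_saved_per_year, annualized_cost):
    """Adopt a labor-saving technology only if the wage bill it saves
    exceeds its annualized capital cost."""
    return hourly_wage * hours_saved_per_year > annualized_cost

# Hypothetical: a scanner system saving 2,000 labor-hours a year,
# at an annualized capital cost of $20,000.
print(adopt(hourly_wage=15.0, hours_saved_per_year=2000, annualized_cost=20000))  # True: worth it at high wages
print(adopt(hourly_wage=2.0, hours_saved_per_year=2000, annualized_cost=20000))   # False: not worth it at low wages
```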

Using data on capital per worker and on GDP per worker across countries at different periods of time, Allen estimates a world production function. Here is the evolution of the world production function for the periods from 1820 to 1913, and from 1913 to 1990.

These production functions display some common patterns. On the far left, GDP per worker rises in a more-or-less linear way with capital per worker. On the right, at the technological frontier, GDP per worker doesn’t rise with capital per worker. Over time, the technological frontier–where gains from additional capital per worker no longer add to output–keeps rising. For example, the production function flattens out at about $2000 per worker in 1820, at about $4500 per worker in 1913, $17,000 per worker in 1965, and $35,000 per worker in 1990. Allen suggests that the technological leaders grow by stages, taking a generation or two to perfect the possibilities of one level of capital per worker, before pushing further up the scale.
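
One rough way to caricature this pattern is a production function that rises with capital per worker and then flattens at an era-specific frontier. The functional form and the slope below are my own illustrative assumptions, not Allen's estimates; only the frontier levels come from the figures quoted above.

```python
def output_per_worker(k, frontier, slope=0.5):
    """Stylized production function: output rises roughly linearly with
    capital per worker (k), then flattens at the era's frontier level.
    (The linear-then-flat form and the slope are illustrative assumptions.)"""
    return min(slope * k, frontier)

# Frontier levels at which the estimated function flattens, by era:
frontiers = {1820: 2000, 1913: 4500, 1965: 17000, 1990: 35000}

for year, f in frontiers.items():
    # With ample capital, output is capped by the frontier of the era.
    print(year, output_per_worker(100_000, f))
```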

In this perspective, technology is quite transferable between countries with roughly similar capital-to-worker ratios: for example, this helps to explain the convergence in per capita GDP among high-income economies in recent decades. However, low-income countries find the technology invented by high-income countries inappropriate for their circumstances; indeed, less capital-intensive technology from 50 or 100 years ago often seems more appropriate for them. This perspective also helps to explain why an ultra-high savings rate has so often been an important precursor to rapid growth in places like Japan in the mid-twentieth century, then in the East Asian “tiger” economies, and then in China. High savings creates a high capital-to-worker ratio, and thus makes it much more possible to leapfrog forward by adopting technologies closer to the frontier.

Looking ahead, an intriguing question is whether rapidly emerging economies around the world can become their own source of innovation: that is, can they take their high savings rates and draw upon world technological expertise to create a new kind of cutting-edge innovation aimed at their own home markets? Can the emerging countries forge their own technological path? The Economist magazine has been predicting for the last couple of years that this process is now underway. For example, the April 15, 2010 issue had a lengthy “Special Report” called “The new masters of management: Developing countries are competing on creativity as well as cost. That will change business everywhere.” Here’s a flavor of the argument:

“Thirty years ago the bosses of America’s car industry were shocked to learn that Japan had overtaken America to become the world’s leading car producer. They were even more shocked when they visited Japan to find out what was going on. They found that the secret of Japan’s success did not lie in cheap labour or government subsidies (their preferred explanations) but in what was rapidly dubbed “lean manufacturing”. While Detroit slept, Japan had transformed itself from a low-wage economy into a hotbed of business innovation. Soon every factory around the world was lean—or a ruin. …

“Now something comparable is taking place in the developing world…. Emerging countries are no longer content to be sources of cheap hands and low-cost brains. Instead they too are becoming hotbeds of innovation, producing breakthroughs in everything from telecoms to carmaking to health care. They are redesigning products to reduce costs not just by 10%, but by up to 90%. They are redesigning entire business processes to do things better and faster than their rivals in the West.

“As our special report argues, the rich world is losing its leadership in the sort of breakthrough ideas that transform industries. This is partly because rich-world companies are doing more research and development in emerging markets. Fortune 500 companies now have 98 R&D facilities in China and 63 in India. IBM employs more people in developing countries than in America….

“Even more striking is the emerging world’s growing ability to make established products for dramatically lower costs: no-frills $3,000 cars and $300 laptops may not seem as exciting as a new iPad but they promise to change far more people’s lives. This sort of advance—dubbed “frugal innovation” by some—is not just a matter of exploiting cheap labour (though cheap labour helps). It is a matter of redesigning products and processes to cut out unnecessary costs. In India Tata created the world’s cheapest car, the Nano, by combining dozens of cost-saving tricks.”

This scenario suggests that in the future, technological change may not just disseminate gradually from the advanced countries to the rest of the world, as countries build up their capital/labor ratios. Instead, technological change and its effects may also be disseminating from the huge emerging markets back to consumers and firms in high-income countries.

_______________

Added note:

Louis Johnston writes from the College of St. Benedict at St. John’s University to tell me that Robert Allen’s article is also Chapter 4 of Allen’s recent book Global Economic History: A Very Short Introduction (http://amzn.com/0199596654).