Editor Hell

At the end of a long day at my job as the Managing Editor of the Journal of Economic Perspectives, it’s always pleasant to consider those editors whose lives are harder than my own.

Consider the editors who have worked on the Oxford English Dictionary. Lorien Kite tells some of the stories in “The evolving role of the Oxford English Dictionary,” which appeared in the Financial Times on November 15. For those not familiar with the OED, it not only aspires to include every word in the English language, whether in current use or archaic, but it also seeks to give examples of usage of words over time. The full article is worth reading, but here are a few snippets.

“James Murray (1837-1915), the indefatigable editor who oversaw much of the first edition, was originally commissioned to produce a four-volume work within a decade; after five years, he had got as far as the word “ant”.”

“When work began on OED3 in the mid-1990s, it was meant to be complete by 2010. Today, they are roughly a third of the way through and Michael Proffitt, the new chief editor, estimates that the job won’t be finished for another 20 years.”

“The first edition, published in 10 instalments between 1884 and 1928, defined more than 400,000 words and phrases; by 1989, when two further supplements of 20th-century neologisms were combined with the original to create the second, this had risen to some 600,000, with a full word count of 59m. Once the monumental task of revising and updating that last (and possibly final) printed incarnation is complete, the third edition is expected to have doubled in overall length.”

“The OED records 750,000 individual “sessions” each month, most of which come via institutions such as libraries, universities, NGOs and government departments. … The surprising thing, explains Judy Pearsall, editorial director for dictionaries in OUP’s global academic division, is that a quarter of these monthly visits are coming from outside what we think of as the English-speaking world. In September, the US accounted for the single biggest group of users, followed by the UK, Canada and Australia. At numbers five and six, however, are Germany and China. Readership from countries where English is not the first language is growing faster too …”

Thanks to Larry Willmore at his “Thought du Jour” blog for pointing out the article. I’m going to put it in my “Editor Hell” file folder, next to the example of Werner Stark, who edited the collected economic works of Jeremy Bentham. Here’s my description of his task from an essay I wrote in 2009 called “An Editor’s Life at the Journal of Economic Perspectives.” (If you are curious about my personal background and approach to editing, you might find it an interesting read.)

[C]onsider the problems posed in editing the papers of Jeremy Bentham, the utilitarian philosopher and occasional economist. Bentham wrote perhaps 15 pages in longhand almost every day of his adult life. His admirers gathered some of his work for publication, but much was simply stored in boxes, primarily at the library of University College, London. In 1941, an economist named Werner Stark was commissioned by the Royal Economic Society to prepare a comprehensive edition of Bentham’s economic writings, which in turn are just a portion of his overall writings. In the three-volume work that was published 11 years later (!), Stark (1952) wrote in the introduction:

The work itself involved immense difficulties. Bentham’s handwriting is so bad that it is quite impossible to make anything of his scripts without first copying them out. I saw myself confronted with the necessity of copying no less than nine big boxes of papers comprising nearly 3,000 pages and a number of words that cannot be far from the seven-figure mark. But that was only the first step. The papers are in no kind of order: in fact it is hard to imagine how they ever became so utterly disordered. They resemble a pack of cards after it has been thoroughly shuffled. . . . The pages of some manuscripts, it is true, were numbered, but then they often carried a double and treble numeration so that confusion was worse confounded, and sometimes I wished there had been no pagination at all. In other manuscript collections the fact that sentences run uninterruptedly from one sheet onto another, is of material help in creating order out of chaos. I was denied even this assistance. It was one of Bentham’s idiosyncrasies never to begin a new page without beginning at the same time a new paragraph. But I cannot hope to give the reader an adequate idea of the problems that had to be overcome. 


Stark’s lamentations would chill the heart of any editor. “Bentham was most unprincipled with regard to the use of capitals.” “After careful consideration, it was found impossible to transfer the punctuation of Bentham’s manuscripts on to the printed page. When he has warmed to a subject and is writing quickly, he simply forgets to punctuate . . . ” And so on.

Of course, the task of editing can have some extraordinary payoffs. Making Bentham’s thoughts available and accessible to readers is of great importance. One can imagine a future in which you will buy the OED as an app for your e-reader or your word-processor, and definitions and past uses will be only a click away. In a 2012 essay “From the Desk of the Managing Editor,” written for the 100th issue of the Journal of Economic Perspectives, I tried to describe some of what an editor can hope to accomplish:

Communication is hard. The connection between writer and reader is always tenuous. No article worth the reading will ever be a stroll down the promenade on a summer’s day. But most readers of academic articles are walking through swampy woods on a dark night, squelching through puddles and tripping over sticks, banging their shins into rocks, and struggling to see in dim light as thorny branches rake at their clothing. An editor can make the journey easier, so the reader need not dissipate time and attention overcoming unnecessary obstacles, but instead can focus on the intended pathway. 

Obstacles to understanding arise both in the form of content and argument and also in the nuts and bolts of writing. An editor needs a certain level of obsessiveness in confronting these issues, manuscript after manuscript, for the 1,000 pages that JEP publishes each year. Plotnick (1982, p. 1) writes in The Elements of Editing: “What kind of person makes a good editor? When hiring new staff, I look for such useful attributes as genius, charisma, adaptability, and disdain for high wages. I also look for signs of a neurotic trait called compulsiveness, which in one form is indispensable to editors, and in another, disabling.”

The ultimate goal of editing is to strengthen the connection between authors and readers. Barney Kilgore, who was editor of the Wall Street Journal during the years when its circulation expanded dramatically in the 1950s and 1960s, used to post a motto in his office that would terrify any editor (as quoted in Crovitz 2009): “The easiest thing in the world for a reader to do is to stop reading.” An editor can help here, by serving as a proxy for future readers.

Shifting Components of the Dow Jones Industrial Index

The Dow Jones Industrial Index is based on stock prices of 30 large blue-chip companies that in some ill-defined way are supposed to represent the core of the U.S. economy. Over time, some companies are replaced by others. In September, for example, the formerly private investment bank Goldman Sachs replaced the public commercial bank Bank of America; the payments company Visa replaced the information technology company Hewlett-Packard; and the consumer clothing and gear company Nike Inc. replaced Alcoa Inc., which was traditionally an aluminum company but now has its fingers in various elements of design and manufacturing of parts, along with recycling. The changes seemed to me symptomatic of broader changes in the US economy, which made me look back at the companies in the Dow over time.

The first official Dow Jones Index was started in 1896, although Charles Dow had been putting out an earlier index, mainly of railroad stocks, as far back as the 1870s. Here, I’ll just offer a few comparisons from more recent times. The companies in the Dow stayed unchanged from 1959 to 1976. The first column shows the list of Dow Jones index companies from that time period–call it roughly 40-50 years ago. The second column shows the companies in the Dow from 1994 to 1997, which is a little less than two decades ago. The third column shows the current list. These lists push me to think about how the US economy has been evolving.

As a starting point, compare the 1959-1976 list to the present. There are only six companies on both lists: AT&T (which is of course a vastly different company now than when it was the monopoly provider of U.S. telephone services), DuPont, General Electric, and Procter and Gamble, plus Standard Oil (N.J.), which became Exxon and eventually ExxonMobil, and United Aircraft, which became United Technologies. A number of companies involving metal are out of the index: Aluminum Company of America, Allied Can, Anaconda Copper, International Nickel, and U.S. Steel, as are companies focused on commodities like American Can, International Paper, and Owens-Illinois Glass.

The new entries are tech companies like 3M, Cisco, IBM, Microsoft, and United Technologies, as well as financial companies like American Express, Visa, JP Morgan, Goldman Sachs, and Travelers. Health care-related companies, like Merck, Pfizer, and UnitedHealth Group, are new entries. The face of American retailing was Sears in the earlier list; now it is Wal-Mart and Home Depot. The face of food products was General Foods in the earlier list; now it’s Coca-Cola and McDonald’s. The face of the oil industry was two Standard Oil companies (!) and Texaco in the earlier list; now it’s ExxonMobil. International Harvester is off the list; Caterpillar is on.

The middle list is a snapshot of the transition between past and present. By my count, from the 1959-1976 list up to the mid-1990s, about half of the 30 companies in the Dow (16 of 30) remained in some form, although several changed their names (Allied Chemical became AlliedSignal, Standard Oil (N.J.) became Exxon, United Aircraft became United Technologies). Also, about half of the companies in the Dow in the mid-1990s (15 of 30) are no longer in the index at present. I don’t claim to know what the “right” amount of turnover should be among top companies in a free-market society. But over the time frame of a few decades, the turnover is substantial.
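
For what it’s worth, the counting itself is just a set comparison. Here is a minimal Python sketch with abbreviated, hypothetical lists (the names below are placeholders for illustration, not the full 30-company rosters):

```python
# Hypothetical subsets of the Dow lists, for illustration only;
# the real comparison uses all 30 names from each period.
dow_1959_1976 = {"General Electric", "DuPont", "Sears", "International Harvester"}
dow_mid_1990s = {"General Electric", "DuPont", "Wal-Mart", "Caterpillar"}

survivors = dow_1959_1976 & dow_mid_1990s   # companies on both lists
departed = dow_1959_1976 - dow_mid_1990s    # companies that dropped out

print(f"Survivors: {len(survivors)} of {len(dow_1959_1976)}")
print(f"Departed:  {len(departed)} of {len(dow_1959_1976)}")
```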

Changes in America’s Family Structure

When families get together for Thanksgiving and the holidays that follow, the structure of those families is different than it was a few decades ago. Jonathan Vespa, Jamie M. Lewis, and Rose M. Kreider of the U.S. Census Bureau provide some background in “America’s Families and Living Arrangements: 2012” (August 2013). In some ways, none of the trends are deeply surprising, but in other ways, the patterns of households set the stage for our political and economic choices.

As a starting point, here’s a graph showing changes in households by type. Married households with children were 40.3% of all US households in 1970; in 2012, that share had fallen by more than half to 19.6%. Interestingly, the share of households that were married without children has stayed at about 30%. The share of “other family households,” usually meaning single-parent families with children, has risen. Overall, the share of U.S. households that involve a family (either married or with children) was 81% back in 1970, but down to 66% in 2012. The share of households that are men or women living alone has risen. The figures are not the same across gender in part because of differences in older age brackets: “Nearly three-quarters (72 percent) of men aged 65 and over lived with their spouse compared with less than half (45 percent) of women.”

The average number of people in households is falling. The share of households with five or more people has dropped by more than half, from 20.9% in 1979 to 9.6% in 2012. Meanwhile, 46% of households had 1 or 2 people in 1970, and 61% of households had 1 or 2 people in 2012.

One of the hot topics in the last few years has been the subject of 20-somethings moving home to live in their parents’ basement. This pattern is visible in the Census data, but it’s less striking over time than I might have thought, and the pattern started before the onset of the recession. (The data in the early years of this figure come from different statistical surveys than the post-1983 data, so one shouldn’t make too much of what appears to be a jump from 1980 to 1983.) My guess is that as the share of students enrolling in higher education has risen over time, some part of the rise in 18-24 year-olds living at home reflects college students.

Finally, it’s worth noting that one-quarter of all US children are being raised in single-parent households. On average, these households have below-average incomes, and with only one parent at home, they are less able to provide hours of parental time.

The structure of households shapes politics and economics. For example, a greater share of adults living alone means a shift in the housing supply away from big houses and toward apartments, and makes it more likely that these single-person households will locate in or near cities rather than in suburban houses. A smaller share of households with children means that when governments set priorities, support for schools will be lower. Households are also a way of sharing risk: a household with two adults has more possibilities for sharing the risk of job loss, or sharing the risk that time needs to be spent dealing with sickness or injury. Therefore, the growth in single-person households tends to mean increased support for social methods of sharing risk, including government programs that support unemployment insurance or health insurance. To some extent, we are how we live.

An Economist Chews Over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there’s anything wrong with that.

The last time the U.S. Department of Agriculture did a detailed “Overview of the U.S. Turkey Industry” appears to be back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the Eatturkey.com website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.


On the production side, the National Turkey Federation explains: “Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing – from breeding through delivery to retail.” However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

“In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent.”
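
As a quick check on the arithmetic in that last quoted passage, here is a minimal Python sketch using the two weights given by the USDA:

```python
# Average live weight per bird, from the USDA figures quoted above
weight_1986 = 20.0   # pounds
weight_2006 = 28.2   # pounds

gain = (weight_2006 - weight_1986) / weight_1986
print(f"Gain in average weight per bird: {gain:.0%}")   # about 41 percent
```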

U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a “turkey” as a product that doesn’t have a lot of opportunity for technological development, but clearly I’m wrong. Here’s a graph showing the rise in size of turkeys over time.

The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here’s a list of top turkey producers in 2011 from the National Turkey Federation.



For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant “basket” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner fell about 1% in 2013, compared with 2012. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, especially since 1990 or so, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate.
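
For readers who want to see the adjustment behind the two lines, here is a minimal sketch of deflating a nominal basket cost by an overall price index to express it in constant dollars. The dollar amounts and index levels below are placeholders, not the actual Farm Bureau or BLS figures:

```python
# Hypothetical nominal cost of the Thanksgiving dinner basket and a
# hypothetical overall price index (e.g., the CPI) for three years.
nominal_cost = {1990: 28.0, 2000: 33.0, 2013: 49.0}    # dollars, placeholder values
price_index = {1990: 131.0, 2000: 172.0, 2013: 233.0}  # index levels, placeholder values

base_year = 2013
real_cost = {year: cost * price_index[base_year] / price_index[year]
             for year, cost in nominal_cost.items()}

for year in sorted(nominal_cost):
    print(f"{year}: nominal ${nominal_cost[year]:.2f}, "
          f"real ${real_cost[year]:.2f} (in {base_year} dollars)")
```

If the real line comes out roughly flat, the dinner’s price has been rising at about the same rate as the overall index, which is the pattern in the figure.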

Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What’s not to like?

(Note: This is an updated version of a post that was first published on Thanksgiving Day 2011.)

For those whose appetite for turkey-related economics is not yet satiated, I recommend that you turn next to the article in the New York Times last Sunday by Catherine Rampell, which tackles the puzzle of why the price of frozen turkeys tends to fall right before Thanksgiving, when one might expect demand to be highest. The article is here; a blog post with background information is here.

The Conundrum of EITC Overpayments (and the Health Insurance Exchanges)

Just to be up-front, I’m a long-time fan of the Earned Income Tax Credit. Other government programs to assist those with low incomes can easily discourage work. Imagine that every time a low-income person earns $1, they lose roughly $1 worth of benefits in welfare payments, food stamps, or housing vouchers, perhaps even losing their Medicaid health insurance. As a result, the incentive to work is dramatically reduced: for earlier posts with details on how this works in practice, see here and here. But with the EITC, when low-income households (especially those with children) earn income, the federal government pays them an additional tax credit based on their earnings. Thus, the EITC is a reward for working, not a payment conditional on not working. Here’s a nice, readable overview of the EITC from the Center on Budget and Policy Priorities.
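
To make that incentive contrast concrete, here is a minimal Python sketch comparing a stylized benefit-withdrawal program with a stylized earnings subsidy. The parameter values (a $10,000 benefit withdrawn dollar-for-dollar, and a 40 percent credit rate capped at $4,000) are hypothetical illustrations, not the actual EITC schedule:

```python
def net_income_benefit_withdrawal(earnings, base_benefit=10_000, withdrawal_rate=1.0):
    """Stylized program: benefits fall dollar-for-dollar as earnings rise."""
    benefit = max(base_benefit - withdrawal_rate * earnings, 0)
    return earnings + benefit

def net_income_earnings_subsidy(earnings, credit_rate=0.40, max_credit=4_000):
    """Stylized EITC-like program: a credit that rises with earnings, up to a cap."""
    credit = min(credit_rate * earnings, max_credit)
    return earnings + credit

for earned in (0, 5_000, 10_000):
    print(earned,
          net_income_benefit_withdrawal(earned),
          net_income_earnings_subsidy(earned))
```

Under the first scheme, earning an extra $5,000 leaves net income unchanged at $10,000; under the second, net income rises by more than the extra earnings, which is the sense in which the EITC rewards work.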

But the EITC does have a long-standing problem: a report from the Treasury Inspector General for Tax Administration estimates that about one-fifth of all EITC payments are made to those who don’t actually qualify for them. In 2012 alone, the EITC had $11.6 billion to $13.6 billion in overpayments. Looking back over time, that adds up to an accumulation of $110 billion to $132 billion over the decade from 2003 to 2012.

The report is “The Internal Revenue Service Is Not in Compliance With Executive Order 13520 to Reduce Improper Payments” (August 28, 2013, #2013-40-084). President Obama signed Executive Order 13520 on November 20, 2009, which sought to increase the accountability of federal agencies to reduce improper payments. But as TIGTA reports, the IRS has not succeeded in reducing these EITC overpayments. On the other side of the coin, TIGTA writes, “The IRS estimates that the participation rate for individuals who are eligible to receive the EITC is between 78 and 80 percent.”

Why is this problem so difficult? And if the federal government runs the EITC with a 20% error rate, what are the chances it can run the health care exchanges more effectively?

Here’s the EITC enforcement conundrum: The program is spending 20% or so of its funds on millions of households that aren’t eligible, while failing to reach millions of households that are eligible. But the amounts for any given household are small. The CBPP summary mentioned above writes: “During the 2010 tax year, the average EITC was $2,805 for a family with children and $262 for a family without children.” In addition, low-income people are moving on and off the program all the time. TIGTA writes: “Studies show that approximately one-third of EITC claimants each year are intermittent or first-time claimants.” It notes that some of the cases of EITC overpayment are just plain fraud, sometimes aided and abetted by those who prepare taxes. About two-thirds of tax returns that claim the EITC are filled out by outside preparers. But there are also lots of gray-area situations where the complexity of the tax law and the EITC provisions, along with general confusion, makes it uncertain whether someone is eligible.

It won’t make economic sense for the IRS to hire a bunch of well-paid tax auditors to delve into the complex and incomplete financial records, home lives, and tax returns of millions of low-income families, hoping to recover a few hundred or even a couple of thousand dollars per family. Thus, the TIGTA report argues: “The IRS must develop alternatives to traditional compliance methods to significantly reduce EITC improper payments.” The IRS has made some efforts to communicate the law more clearly to paid tax preparers. But as TIGTA reports, “the IRS has made little improvement in reducing improper EITC payments as a whole …”

The federal government can do some large-scale programs pretty well. For example, it sends out Social Security checks in a cost-effective manner. With more of a paperwork struggle, it manages to cope with annual tax returns and Medicare and Medicaid payments to health care providers.

But the EITC is a program that involves complex rules for eligibility and size of payments, much more complex than Social Security. The EITC is aimed at low-income people, many of whom have economic and personal lives that are in considerable flux and many of whom have limited ability to deal with detailed paperwork, unlike the health care providers who receive Medicare and Medicaid payments. The envisioned health insurance exchanges are likely to end up serving many more people than the EITC. The complexity of decisions about buying health insurance is greater than the complexity of qualifying for cash payments from the EITC. When people’s eligibility for subsidies is moving and changing each year, as it is for the EITC and will also be for the health insurance exchanges, it will be difficult for the federal government to sort out eligibility. And as the complexity of the rules rises, it will spawn a network of people to help in filling out the forms, most of whom will be honest and forthright, but some of whom will be focused on making people eligible for the largest possible subsidies, with little concern for legal qualifications.

I remain a fan of the EITC, but I confess that as a matter of practical administration, I’m not at all sure of how to substantially reduce its persistent problem of overpayments in a cost-effective manner. Part of the answer probably involves finding ways to simplify the rules and the interface with recipients, so that eligibility can be more clear-cut. In addition, I suspect that practical problems that cause a mix of over- and underpayments for the EITC will be dwarfed by the practical problems and error rates in deciding about eligibility for subsidies in the health insurance exchanges–even if or when the web interface itself becomes functional.

An Okun’s Law Refresher

Okun’s law–which is really more of a rule of thumb–holds that for each increase of one percentage point in the growth rate of real GDP, the unemployment rate falls by about 0.3 percentage points. Arthur Okun formulated this rule in a 1962 research paper, called “Potential GNP: Its Measurement and Significance,” which appeared in the Proceedings of the Business and Economics Statistics Section of the American Statistical Association (pp. 98-103). It’s available as a Cowles Foundation working paper here. Michael T. Owyang, Tatevik Sekhposyan, and E. Katarina Vermann take a look at the current state of Okun’s law as the U.S. economy struggles with sluggish growth and a frustratingly gradual decline in its unemployment rate in “Output and Unemployment: How Do They Relate Today?” which appears in the October 2013 issue of the Regional Economist, published by the Federal Reserve Bank of St. Louis. They argue that Okun’s law has actually held up quite well over time.

Consider first the evidence roughly as Okun would have seen it back in the early 1960s. This figure shows the quarterly change in the unemployment rate and the quarterly growth rate in output from 1948-1960. Economic data series like GDP are updated over time, so this isn’t quite the same data that Okun used. But it’s close. Notice that the main pattern of the points is a downward slope: that is, a negative growth rate of GDP is correlated with a rise in unemployment, while a rise in GDP is correlated with a fall in unemployment.

Now here is the same graph, but including all the quarters from 1948 up to 2013. Time periods are distinguished by the shape of the points: 1948-1960 is blue squares, 1961-2007 is black dots, and 2008-2013 is red triangles. They estimate that the Okun’s law relationship over this time is that, on average, a 1 percentage point rise in the growth rate of real GDP is associated with a 0.28 percentage point fall in the unemployment rate–almost exactly the same as what Okun found in 1962.
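
For readers who want to see the mechanics, here is a minimal sketch of how such an estimate is produced: regress the quarterly change in the unemployment rate on quarterly real GDP growth and read off the slope. The arrays below are placeholders; the actual estimate uses the full 1948-2013 quarterly series.

```python
import numpy as np

# Placeholder quarterly data (annualized real GDP growth in percent, and the
# change in the unemployment rate in percentage points). The real exercise
# uses the full 1948-2013 series from the BEA and BLS.
gdp_growth = np.array([4.0, 2.5, -1.0, 3.0, 0.5, -2.5, 5.0, 1.5])
unemp_change = np.array([-0.6, -0.2, 0.8, -0.4, 0.2, 1.0, -1.0, 0.0])

# Fit unemp_change = a + b * gdp_growth; Okun's coefficient is the slope b,
# which the St. Louis Fed authors estimate at roughly -0.28 for 1948-2013.
b, a = np.polyfit(gdp_growth, unemp_change, 1)
print(f"Estimated Okun coefficient: {b:.2f} (intercept {a:.2f})")
```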

The Triffin Dilemma and U.S. Trade Deficits

Martin Feldstein interviews Paul Volcker in the most recent issue of the Journal of Economic Perspectives. The interview ranges from the 1970s up to the present, and I commend it to you in full. But one of Volcker’s comments in particular sent me scurrying to learn more. At one point in the interview, Volcker says: “I think we’re back, in a way, in the Triffin dilemma.” And I thought to myself: “What the heck is that?”

Here’s a discussion of the Triffin dilemma from an IMF website describing problems in the international monetary system from 1959 to 1971, before the Bretton Woods system of (mostly) fixed exchange rates cratered. The IMF states:

“If the United States stopped running balance of payments deficits, the international community would lose its largest source of additions to reserves. The resulting shortage of liquidity could pull the world economy into a contractionary spiral, leading to instability. … If U.S. deficits continued, a steady stream of dollars would continue to fuel world economic growth. However, excessive U.S. deficits (dollar glut) would erode confidence in the value of the U.S. dollar. Without confidence in the dollar, it would no longer be accepted as the world’s reserve currency. The fixed exchange rate system could break down, leading to instability.”

In short, the U.S. needs to run trade deficits, because the rest of the world wants to hold U.S. dollars as a safe asset, and the way the rest of the world gets U.S. dollars is when the U.S. economy buys imported products. However, if U.S. trade deficits go on for too long or become too large, then the U.S. dollar might stop looking like a safe asset, which in turn could bring its own wave of global financial disruption. Here’s how Volcker describes the Triffin dilemma in the interview:

“In the 1960s, we were in a position in the Bretton Woods system with the other countries wanting to run surpluses and build their reserve positions, so the reserve position of the United States inevitably weakened—weakened to the point where we no longer could support the convertibility of currencies to gold. Now, how long can we expect as a country or world to support how many trillions of dollars that the rest of the world has? So far, so good. The rest of the world isn’t in a very good shape, so we look pretty good at the moment. But suppose that situation changes and we’re running big [trade] deficits, and however many trillion it is now, it’s another few trillions. At some point there is vulnerability there, I think, for the system, not just for the United States. We ought to be conscious of that and do something about it.”

Triffin laid out his views in a 1960 book called “Gold and the Dollar Crisis,” which is not all that easily available. But his 1960 Congressional testimony, available here, offers an overview. However, the Triffin dilemma as it exists today, in a world of flexible exchange rates, is not the same as it was back in 1960. (Of course, this is also why Volcker said that “in a way,” we had returned to the Triffin dilemma.) Lorenzo Bini Smaghi, a member of the Executive Board of the European Central Bank, laid out some of the similarities and differences in 2011 in a talk, “The Triffin dilemma revisited.”

Smaghi notes the arrival of flexible exchange rates, but argues that the biggest change affecting the Triffin dilemma is something else: the arrival of multi-directional and private-sector capital flows in the global financial economy means that there are now lots of ways for other countries to get the U.S. dollars (or euros) that they wish to hold as a safe asset, without the U.S. (or the euro area) necessarily running a trade deficit. Smaghi said:

Today, the United States and the euro area are not obliged to run rising current account deficits to meet the demand for dollars or euros. This is for two main, interlinked reasons. First, well-functioning, more liquid and deeply integrated global financial markets enable reserve-issuing countries to provide the rest of the world with safe and liquid financial liabilities while investing a corresponding amount in a wide range of financial assets abroad. The euro has indeed become an important international currency since its inception and the euro area has been running a balanced current account. In a world where there is no longer a one-to-one link between current accounts, i.e. net capital flows and global liquidity, a proper understanding of global liquidity also needs to include gross capital flows. Second, under BW [Bretton Woods] global liquidity and official liquidity were basically the same thing, but today the “ease of financing” at global level also crucially depends on private liquidity directly provided by financial institutions, for instance through interbank lending or market making in securities markets. Given the endogenous character of such private liquidity, global official and private liquidity have to be assessed together for a proper evaluation of global liquidity conditions at some point in time, and there is no endemic shortage of global liquidity, as the empirical evidence confirms. This is not to deny that temporary shortages can occur, as happened after the bankruptcy of Lehman Brothers in September 2008. But such shortages are a by-product of shocks and boom-bust cycles, not an intrinsic feature of the IMS [international monetary system], and can be tackled with an appropriate global financial safety net.

So has the Triffin dilemma been eliminated? Smaghi argues not. He points out that there is a growing and widespread demand for U.S. dollar holdings around the world from emerging market economies that want to hold U.S. dollar reserves as a hedge against sudden capital outflows or to keep their own currency undervalued, as well as from oil and other commodity exporters who prefer to hold their accumulated trade surpluses in the form of U.S. dollars. And when economic shocks occur, demand for these safe U.S. dollar assets can rise and fall in ways that threaten economic stability.

Triffin’s policy solution, back in the day, was the creation of a global central bank that would issue a new “reserve unit,” which central banks could hold as a safe asset but which would not be linked to gold or currencies. While Volcker doesn’t endorse that approach, he does say that the way out lies in international monetary reform. Smaghi discusses the possibility that the Triffin dilemma might be much reduced in a multi-polar international monetary system, where the U.S. dollar remains quite prominent, but those seeking safe assets can turn to a variety of other currencies, too.

This discussion may seem abstruse, so let me pull it back to some headline economic statistics. There was once a time, not so very long ago, when many people used to worry about what would happen if the U.S. economy ran sustained and large trade deficits. Through most of the 1960s and 1970s, the U.S. was fairly close to balanced trade. The first big plunge into trade deficits happened in the mid-1980s, but the trade deficits since the late 1990s have been just as big, or bigger, than those of the 1980s. Here’s a figure showing U.S. trade deficits as a share of GDP since the time Triffin enunciated his dilemma up to the present.

I suspect that the U.S. trade deficits in recent decades would have seemed almost impossibly large to Triffin and others back in 1960. But in recent decades, the rest of the world has wanted to hold trillions of U.S. dollars as a safe asset, and so the U.S. economy could import to its heart’s content, send the U.S. dollars abroad, and run enormous trade deficits. But at some point, the accumulation of U.S. trade deficits becomes so sustained and large that it will lead to economic disruption. As Paul Volcker says: “We ought to be conscious of that and do something about it.”

(Thanks to Iva Djurovic for creating the trade deficit figure.)


The State of US Health

It’s fairly well-known that life expectancy for Americans is below that in other high-income countries. But did you know that the gap is getting worse? Or how the underlying causes of death in the U.S., together with the proximate factors behind those causes, have been evolving? The Institute for Health Metrics and Evaluation at the University of Washington takes up these issues and more in “The State of US Health: Innovations, Insights, and Recommendations from the Global Burden of Disease Study.”

Let’s start with some international comparisons. Life expectancy in the US is rising–but it is rising more slowly than life expectancy in other OECD countries.

Or consider the average age at death. In the U.S., the average age of death rose by nine years from 1970 to 2010, but it has risen faster in most other places. In this figure, the vertical axis shows age at death in 1970, so if you look at countries from top to bottom, you see how they were ranked by life expectancy in 1970. The horizontal axis shows age at death in 2010, so if you look at countries from right to left, you see how they are ranked by life expectancy in 2010. Countries at about the same horizontal level as the US, like New Zealand, Canada, Australia, Spain, Italy, and Japan, all had life expectancy similar to the U.S. in 1970, but are now out to the right of the US, with higher life expectancies in 2010.

What are the causes of early death? This study seeks to rank different causes of death according to “years of life lost”–that is, a cause of death that affects people at younger ages is counted more heavily than a cause of death which affects people at older ages. Here’s the comparison from 1990 to 2010. Notice that the top six causes of years of life lost change their order a bit, but are otherwise unchanged: heart disease, lung cancer, stroke, chronic obstructive pulmonary disease (COPD), road injury, and self-harm. But after that, there are some dramatic changes. For example, the years of life lost because of HIV/AIDS, interpersonal violence, and pre-term birth complications are ranked lower. However, cirrhosis, diabetes, Alzheimer’s disease, and drug use disorders now rank much higher.

The study also looks at what is called a “disability-adjusted life year” or a DALY. This adds together years of life lost and also makes an adjustment for years lived with a disability. Looking at all the causes of death, the study calculates what percentage of the DALYs are attributable to various “risk factors.” Clearly, the big risk factors are dietary risks (which largely seems to mean not eating enough fruits, nuts and seeds, vegetables, and whole grains, while eating too much salt and processed meat), together with tobacco use, obesity, high blood pressure, and low physical activity. Clearly, many of these issues of diet, weight, and exercise interact.
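
As a rough sketch of how these metrics fit together, here is a minimal Python illustration. The reference life expectancy, ages, disability weight, and formulas below are simplified, made-up examples; the actual Global Burden of Disease calculations use detailed age- and severity-specific weights.

```python
# Illustrative calculation of years of life lost (YLL), years lived with
# disability (YLD), and disability-adjusted life years (DALY).
REFERENCE_LIFE_EXPECTANCY = 86  # years, hypothetical standard

ages_at_death = [55, 67, 80]    # hypothetical deaths from one cause
yll = sum(max(REFERENCE_LIFE_EXPECTANCY - age, 0) for age in ages_at_death)

years_with_disability = 12      # hypothetical years lived with the condition
disability_weight = 0.3         # 0 = full health, 1 = equivalent to death
yld = years_with_disability * disability_weight

daly = yll + yld
print(f"YLL = {yll}, YLD = {yld:.1f}, DALY = {daly:.1f}")
```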

The report also offers some county-level analysis across the US, with a reminder that “how health is experienced in the US varies greatly by locale. People who live in San Francisco or Fairfax County, Virginia, or Gunnison, Colorado, are enjoying some of the best life expectancies in the world. In some US counties, however, life expectancies are on par with countries in North Africa and Southeast Asia.”

As the US political system convulses over the legal standards that will govern health insurance, the mechanics of paying for health care are obviously of importance. But at the broad social level, life expectancy and health are first and foremost about diet, exercise, not smoking, not drinking to excess, and other behaviors.

A Tripartite Mandate for Central Banks?

The U.S. Federal Reserve operates under what is commonly called a “dual mandate,” which basically means that it should take both inflation and unemployment into account when making its decisions. The dual mandate means that the Fed does not take into account the risk that asset market bubbles are destabilizing the economy. Should the dual mandate be turned into a tripartite mandate, with the risk of a financial crisis as the third factor that the Fed (and other central banks) should also take into account? In the most recent issue of the Journal of Economic Perspectives, Ricardo Reis discusses “Central Bank Design,” and in particular what the economics literature has to say about a range of issues related to how central banks should choose their goals, their decision-makers, their policy tools, their communication methods, and more. Reis explains why economists have typically not supported a treble mandate in the past, and also why he thinks that this may change in the not-too-distant future.

Why have central banks typically not focused on asset market bubbles and financial instability in the past? Here’s how Reis explains it (as usual, citations and footnotes are omitted):

“A more contentious debate is whether to have a tripartite mandate that also includes financial stability. After all, the two largest US recessions in the last century—the Great Depression of the 1930s and the Great Recession that started in 2007—were associated with financial crises. … [I]f financial stability is to be included as a separate goal for the central bank, it must pass certain tests: 1) there must be a measurable definition of financial stability, 2) there has to be a convincing case that monetary policy can achieve the target of bringing about a more stable financial system, and 3) financial stability must pose a trade-off with the other two goals, creating situations where prices and activity are stable but financial instability justifies a change in policy … Older approaches to this question did not fulfill these three criteria, and thus did not justify treating financial stability as a separate criterion for monetary policy.”

As a recent example, think back to the dot-com bubble of the late 1990s. Sure, the prices of tech stocks seemed to be soaring implausibly high. But was it really the role of the Federal Reserve to decide that stock prices were “too high,” and to change monetary policy, perhaps bringing on a recession, in an effort to bring stock prices down? As Reis notes: “Yet, at most dates, there seems to be someone crying “bubble” at one financial market or another, and the central bank does not seem particularly well equipped to either spot the fires in specific asset markets, nor to steer equity prices.”

When the dot-com boom was followed by a short and shallow recession in 2001, the Federal Reserve did what it could to cushion the economy with lower interest rates at that time. I’m probably not alone in being someone who watched the housing price boom around 2006 and thought: “Sure, there might be a recession eventually, like in 2001, but it works OK to have the Federal Reserve react after the fact when unemployment goes up. The Fed isn’t well-equipped to judge when housing prices or stock prices are ‘too high’ or ‘too low,’ nor to adjust monetary policy to alter such prices.”

But as Reis points out, more sophisticated ideas of how to define the rising risk of a financial crisis have come into prominence in the last few years. Instead of focusing on whether stock prices or house prices are rising, or seem “too high,” these approaches look at factors like whether total borrowing is rising. Reis explains:

“A more promising modern approach begins with thinking about how to define financial stability: for example, in terms of the build-up of leverage, or the spread between certain key borrowing and lending rates, or the fragility of the funding of financial intermediaries. This literature has also started gathering evidence that when the central bank changes interest rates, reserves, or the assets it buys, it can have a significant effect on the composition of the balance sheets of financial intermediaries as well as on the risks that they choose to take. … While it is not quite there yet, this modern approach to financial stability promises to be able to deliver a concrete recommendation for a third mandate for monetary policy that can be quantified and implemented.”

A final point here is that implementing a tripartite mandate may also mean describing the policy tools of a central bank in a different way. Up to 2007, it was reasonable to describe the policy tools of a central bank mainly in terms of its ability to raise or lower interest rates. But when the central bank starts looking at the total amount of bank credit being extended, or at stress-testing whether financial institutions are well-positioned to be resilient in the face of a shock, these sorts of goals can also be accomplished by so-called “macroprudential regulation,” which involves adjusting credit conditions through the rules and standards applied by financial regulators.
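
One way to picture what a quantified third mandate might look like is a policy rule with a financial-stability term added. The sketch below is my own stylized illustration, not Reis’s proposal or any central bank’s actual rule; all coefficients and the stress measure are hypothetical.

```python
def policy_rate(inflation, inflation_target, output_gap, financial_stress,
                neutral_real_rate=2.0, a=0.5, b=0.5, c=0.5):
    """Stylized Taylor-type rule with a hypothetical financial-stability term.

    financial_stress might be measured by leverage growth or credit spreads,
    as discussed in the text; here it is just a number between 0 and 1.
    """
    return (neutral_real_rate + inflation
            + a * (inflation - inflation_target)
            + b * output_gap
            + c * financial_stress)

# With inflation on target and no output gap, rising financial stress alone
# would push the policy rate up under this illustrative rule.
print(policy_rate(2.0, 2.0, 0.0, 0.0))   # 4.0
print(policy_rate(2.0, 2.0, 0.0, 0.8))   # 4.4
```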

Is Altruism a Scarce Resource that Needs Conserving?

The Harvard political philosopher Michael Sandel offers a thought-provoking essay, “Market Reasoning as Moral Reasoning: Why Economists Should Re-engage with Political Philosophy,” in the just-released Fall 2013 issue of the Journal of Economic Perspectives. Like all JEP articles, it is freely available on-line courtesy of the American Economic Association. (Full disclosure: I’ve been the Managing Editor of the JEP since its first issue in 1987.) The article makes a number of arguments about the extent to which “putting a price on every human activity erodes certain moral and civic goods worth caring about.” Here, I’ll focus on one argument presented late in the paper, which is the claim that it is a good thing to let markets based on self-interest function in many areas, because it conserves on scarce resources of altruism.

Sandel makes a persuasive case that a number of economists hold this view, although it is not always stated openly. Here are a few examples. The eminent British economist Sir Dennis Robertson gave a prominent 1954 lecture on the topic “What does the economist economize?” Here is Sandel’s discussion:

Robertson (1954) claimed that by promoting policies that rely, whenever possible, on self-interest rather than altruism or moral considerations, the economist saves society from squandering its scarce supply of virtue. “If we economists do [our] business well,” Robertson (p. 154) concluded, “we can, I believe, contribute mightily to the economizing . . . of that scarce resource Love,” the “most precious thing in the world.”

Kenneth Arrow made a similar argument in a 1972 essay, Sandel notes:

“Like many economists,” Arrow (1972, pp. 354–55) writes, “I do not want to rely too heavily on substituting ethics for self-interest. I think it best on the whole that the requirement of ethical behavior be confined to those circumstances where the price system breaks down . . . We do not wish to use up recklessly the scarce resources of altruistic motivation.”

Or for another example, here’s Sandel describing a speech that Larry Summers gave at Harvard’s Memorial Church in 2003:

Summers (2003) concluded with a reply to those who criticize markets for relying on selfishness and greed: “We all have only so much altruism in us. Economists like me think of altruism as a valuable and rare good that needs conserving. Far better to conserve it by designing a system in which people’s wants will be satisfied by individuals being selfish, and saving that altruism for our families, our friends, and the many social problems in this world that markets cannot solve.”

As Sandel notes with some asperity, this notion of altruism as a scarce resource, “like the supply of fossil fuels,” is highly contestable. Might it not be possible instead that when people act in ways that display altruism, generosity, or civic virtue, the social supply of these virtues tends to expand? It seems plausible to think that altruism, generosity, and even love may be socially created, not just used up.

At one level, Sandel’s critique seems to me fair and well-made. There is actually a reasonable-sized literature in economics and other social sciences looking at how the level of trust and generosity varies across societies. It seems incorrect to think of altruism as a fixed quantity, unaffected by other social institutions.

But at some other level, I feel moved to defend my economist brethren a bit. Focusing on whether altruism is a fixed quantity can be a debater’s point that dwells on the specific phrasing of an argument, rather than the underlying issue at stake. After all, none of these economists are arguing that altruism, generosity, and social virtue are bad ideas or that we should have less of them. Instead, they are arguing that in the real world there exists a division of labor, if you will, in which real-world people choose between altruism and self-interest in different settings. They are arguing that in practical terms, it seems unlikely that social norms of private altruism and generosity would by themselves be able to achieve important social goals like supplying food, housing, health care, education, and the necessities of life, as well as helping the poor or protecting the environment. In such cases, it will be important to consider the interactions of self-interest with the compulsion of law.

Sandel focuses on the arguments for how market forces might impinge on civic virtues worth preserving, which is plenty for one essay. But there is also potential for conflict between civic virtues and any institution of society, at least in certain settings. For example, governments around the world can also easily impinge on civic virtues worth preserving. I do think the issues concerning potential conflicts between market forces and moral virtues are real ones. In the style of eminent philosophers, Sandel poses his argument not as a set of strong claims, but rather as a set of questions for consideration, and I commend his article to your consideration in a similar spirit.