Ultra-Low Interest Rates: Who Wins? Who Loses?

Most of the commentary on the ultra-low interest rate policies pursued over the last five years or so by the Federal Reserve and other central banks has focused on whether they were useful in limiting the length and depth of the Great Recession, and whether or how long they should be continued. In their recent discussion paper for the McKinsey Global Institute, Richard Dobbs, Susan Lund, Tim Koller, and Ari Shwayder accept the conventional wisdom that the ultra-low interest rate policies were useful and appropriate as part of the effort to stave off the Great Recession, but note that there is some controversy over continuing them. But in "QE and ultra-low interest rates: Distributional effects and risks," they then tackle a different if related question: Who has won and who has lost from the ultra-low interest rates? Although their analysis is international, I'll focus mainly on the U.S. results here.

Ultra-low interest rates have two main sets of distributional effects: the first involves interest payments made or received; the second involves how interest rates affect the level of asset prices like homes and bonds. Here's a figure looking at how ultra-low interest rates have affected interest income and payments from 2007-2012.

Of course, lower interest rates mean borrowers pay less, while those receiving interest payments get less. Thus, the big winner from ultra-low interest rates is the U.S. government, which over the 2007-2012 period paid about $900 billion less in interest payments. Indeed, the McKinsey report also notes that central banks like the Federal Reserve have been buying assets as part of the "quantitative easing" policies of recent years, and funds earned by the Fed over and above operating expenses go to the U.S. Treasury. They estimate that the quantitative easing policies gained the U.S. government another $145 billion or so during this time period. So overall, the ultra-low interest rate policies have been worth about $1 trillion to the U.S. government.

Nonfinancial U.S. corporations have interest-bearing debt in the form of bonds and bank loans, so the low interest rate policies have been worth $310 billion to them. U.S. banks have also seen a rise in their net interest income–that is, the amount by which the interest they received from borrowers exceeded the interest they paid to depositors. (In contrast, banks in Europe as a group have been worse off as a result of the ultra-low interest rate policies.)

On the other side, those who were depending on receiving interest payments are worse off. For example, insurance companies and pension funds that were relying on interest payments for part of their returns are down $270 billion from 2007-2012. As the report points out, many of these companies hold bonds that they purchased before interest rates fell, and so they have been somewhat protected from the fall in interest rates. But as the period of ultra-low interest rates continues, insurance companies will either need to shift toward purchasing higher-risk products in search of higher returns, or they may become insolvent.

Households that were relying on interest payments also suffered. However, because younger households tend to be borrowers, while older households are more likely to be relying on interest income, these losses fall heavily on older households. They also fall heavily on households with high levels of wealth–in particular, on the 10% or so of US households that hold 90% of the financial wealth.

Finally, the rest of the world holds large amounts of U.S. dollar debt: for example, think of China's $3.7 trillion in U.S. dollar reserves. The rest of the world has received about $480 billion less as a result of the ultra-low interest rate policies from the U.S. Federal Reserve during 2007-2012.

Now shift over to thinking about the effect of ultra-low interest rates on asset prices. The McKinsey report estimates that as a result of the ultra-low interest rates, U.S. housing prices are about 15% higher than they would otherwise have been (although this estimate is not intended to be precise!) and the value of fixed-income bonds is about 37% higher. (If a bond was issued at some point in the past when interest rates were higher, and interest rates then fall, the bond is worth more as a result.) The report argues that the effect of lower interest rates on stock prices is minimal. But the first two effects mean that household wealth is up about $5.6 trillion as a result of ultra-low interest rates. Of course, these gains go either to those who own a house or to those who own bonds–which again would be the 10% or so of all households that hold 90% of the financial wealth. It's a little tricky to think about these gains in asset values, because presumably at some point, when interest rates return to more normal levels, these gains from ultra-low interest rates will fade away.
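The bond-price point in the parenthetical is just present-value arithmetic, and it can be checked directly. Here's a minimal sketch using my own illustrative numbers (not figures from the McKinsey report) of why a bond that pays a 5% coupon gains value when market rates fall to 2%:

```python
# Illustrative only: price a fixed-coupon bond as the present value of its
# cash flows, to show why a bond issued when rates were higher is worth
# more after market rates fall. The numbers are made up for illustration.

def bond_price(face, coupon_rate, market_rate, years):
    """Present value of annual coupons plus the face value at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_rate) ** years
    return pv_coupons + pv_face

# A $1,000 bond paying a 5% coupon for 10 years:
price_at_5pct = bond_price(1000, 0.05, 0.05, 10)  # market rate equals coupon
price_at_2pct = bond_price(1000, 0.05, 0.02, 10)  # after rates fall to 2%

print(round(price_at_5pct, 2))  # ≈ 1000.00 (priced at par)
print(round(price_at_2pct, 2))  # ≈ 1269.48 (a gain of roughly 27%)
```

The direction of the effect is the whole story here: lock in a 5% coupon, watch market rates fall, and the fixed payments become more valuable.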

These distributional effects of ultra-low interest rates may well be less important than the macroeconomic issue of using low rates to limit the economic carnage of the Great Recession. But the distributional effects are surely large enough to deserve notice. The big gainers are the U.S. government and nonfinancial corporations. The big losers are those trying to save for the future: older households, pension funds, life insurance companies. Other countries around the world have gotten hit in two ways: not only much lower interest payments than they expected, but also potentially unstable inflows of U.S. investment dollars. When U.S. interest rates are rock-bottom, U.S. dollar investment funds flow into smaller economies around the world seeking higher returns; when it seems as if U.S. interest rates might rise, these U.S. dollar investment flows can easily flee back to the U.S. economy, destabilizing the capital markets and exchange rates of the smaller economies.

The Virtues of Market Behavior

A standard line in economics, which I've certainly emphasized often enough, is the remarkable ability of the social institution of markets to transmute self-interested behavior into social welfare. When firms are seeking a profit, they try to provide a combination of price and quality that appeals to customers. When people work at a job, they try to provide the combination of effort and skill that will result in a certain mix of wages and work conditions. When customers shop for the best deal, they provide an incentive for firms and workers to act in these ways. The result of these interacting forces is a set of incentives that translates into an improved standard of living. Of course, the narrow pursuit of self-interest can also lead to connivance, fraud, crime, violence, war, and politics. Jack Hirshleifer made this case in a memorable 1993 speech entitled "The Dark Side of the Force," in which he argued that economists were too sunny in their view of self-interest, and needed to look more closely at both sides.

But as economists have quarreled over how society might best shape and direct the force of self-interest, they have opened themselves to an attack from philosophers who ask: rather than assuming that people are motivated by narrow self-interest, why not seek a world in which people are motivated by virtue? Luigino Bruni and Robert Sugden seek to counter this critique in "Reclaiming Virtue Ethics for Economics," which appears in the Fall 2013 issue of the Journal of Economic Perspectives. (Full disclosure: All articles in the JEP are freely available online courtesy of the American Economic Association. I've been Managing Editor of the JEP since its inception in 1987.)

Bruni and Sugden point out that the critique that economic behavior is instrumental, rather than virtuous in itself, goes back a long way. For example, they quote Aristotle’s Nicomachean Ethics:
“The life of money-making is one undertaken under compulsion, and wealth is evidently not the good we are seeking; for it is merely useful and for the sake of something else.” They sketch the views of modern philosophers who also argue that economic behavior lacks virtue because it is not an end in itself, but instead is an activity that is in some sense socially compelled and performed for the sake of something else. As Bruni and Sugden note, responding to this argument by saying that markets raise the standard of living would miss the point.

Instead, Bruni and Sugden seek to confront this argument about virtue and market behavior head-on. They point out that virtue is typically defined by the context in which a person operates. Thus, even if a soldier or a doctor earns a paycheck, the virtues of their professions lie in courage or in healing. Of course, describing virtues in this way does not mean that all soldiers or all doctors are virtuous!

Bruni and Sugden then argue that market behavior contains the possibility of intrinsic virtue as well, which lies in the act of participating in an activity in which mutual gains are realized. They write: "But economic freedom is not the freedom of each person to get what he wants tout court; it is his freedom to use his own possessions and talents as he sees fit and to trade with whoever is willing to trade with him. We suggest that the common core of these understandings of markets is that markets facilitate mutually beneficial voluntary transactions. … [A] market virtue in the sense of virtue ethics is an acquired character trait with two properties: possession of the trait makes an individual better able to play a part in the creation of mutual benefit through market transactions; and the trait expresses an intentional orientation towards and a respect for mutual benefit." With that perspective in mind, here is the list of virtues that they see in market behavior:

Universality. "If the market is to be viewed as an institution that promotes the widest possible network of mutually beneficial transactions, universality has to be seen as a virtue. Its opposites—favoritism, familialism, patronage, protectionism—are all barriers to the extension of the market."

Enterprise and Alertness. "[E]nterprise in seeking out mutual benefit must be a virtue. Discovering and anticipating what other people want and are willing to pay for is a crucial component of entrepreneurship. … The virtue of alertness to mutual benefit applies to both sides of the market: for mutual benefit to be created, the alertness of a seller has to engage with the alertness of a buyer. Thus, the inclination to shop around, to compare prices, and to experiment with new products and new suppliers must be a virtue for consumers."

Respect for the Tastes of One's Trading Partners. "The spirit of this virtue is encapsulated in the business maxim that the customer is always right. This virtue is closely related to the idea that market transactions are made on terms of equality, and opposed to the paternalistic idea that the relationship of supplier to customer is that of guardian to ward."

Trust and Trustworthiness. "Because the monitoring and enforcement of contracts is often difficult or costly, dispositions of trust and trustworthiness (qualified by due caution against being exploited by the untrustworthy) facilitate the achievement of mutual benefit in markets. If that is right, these dispositions must be market virtues."

Acceptance of Competition. "[A] virtuous trader will not obstruct other parties from pursuing mutual benefit in transactions with one another, even if that trader would prefer to transact with one or another of them instead."

Self-Help. "Thus, it is a market virtue to accept without complaint that others will be motivated to satisfy your wants, or to provide you with opportunities for self-realization, only if you offer something that they are willing to accept in return. … Seeing self-help as a virtue makes it easier to understand how people can find satisfaction in work that they would not choose to do if they were not paid for it."

Non-Rivalry. "Thus, it must be a market virtue to see others as potential partners in mutually beneficial transactions rather than as rivals in a competition for shares of a fixed stock of wealth or status. A disposition to be grudging or envious of other people's gains is a handicap to the discovery and carrying through of mutually beneficial transactions. The corresponding virtue is that of being able to take pleasure in other people's gains—particularly those that have been created in transactions from which you have gained too."

Stoicism about Reward. "But an adequate account of market virtue cannot maintain that what a person earns from market transactions is a reward for the exercise of virtue, in the sense that a literary prize can be seen as a reward for artistic excellence. A person can expect to benefit from market transactions only to the extent that she provides benefits that trading partners value at the time they choose to pay for them. To expect more is to create barriers to the achievement of mutual benefit. Thus, market virtue is associated with not expecting to be rewarded according to one's deserts, not resenting other people's undeserved rewards, and (if one has been fortunate) recognizing that one's own rewards may not have been deserved."

Again, just to be clear, Bruni and Sugden are certainly not claiming that everyone who participates in markets is virtuous in these ways. They are also certainly not claiming that those who are most virtuous in these ways will accumulate the highest riches.

What Bruni and Sugden are trying to do, it seems to me, is to point out the possibilities of virtue in the everyday lives that most of us lead. It's easy to talk about virtue in the context of those who spend their lives working in a leper colony, or creating great art, or educating impoverished children. But some of the philosophers who criticize economic behavior for its inordinate focus on self-interest and lack of virtuous behavior seem to say fairly explicitly that the everyday life of, say, a bricklayer or a factory worker or a file clerk must necessarily lack even the opportunity for virtuous behavior, because their efforts to make a living are without a possibility of intrinsic merit. Indeed, the argument that economic behavior cannot be virtuous seems to shade into a claim that only those who don't need to work for a living can be virtuous. In contrast, Bruni and Sugden argue that the market behavior of everyday life has its virtues worth defending, too.

For a contrasting argument that expresses concerns about how markets may infringe on other important social values, the same issue of the JEP has an article by Michael J. Sandel called "Market Reasoning as Moral Reasoning: Why Economists Should Re-engage with Political Philosophy." I posted on one aspect of Sandel's argument a couple of weeks ago in "Is Altruism a Scarce Resource that Needs Conserving?"

Bhagwati on Doha Lite and Decaffeinated

Jagdish Bhagwati gives his perspective on the state of the Doha Round of world trade talks in "Dawn of a New System," which appears in the December 2013 issue of Finance & Development. It's useful reading as background for the Ninth Ministerial Conference of the World Trade Organization that starts tomorrow in Bali.

As Bhagwati looks back at the status of the Doha Round of trade talks that started back in 2001, he considers three options. One choice, which he calls Doha Heavy, represents the grand bargain in which nations all around the world make substantial movements toward reducing trade barriers and government subsidies. This choice isn't happening. But in 2011, a more modest trade liberalization deal was on the table that Bhagwati calls Doha Lite. In Bhagwati's telling, this deal was acceptable to developing countries around the world, as well as to David Cameron in the UK and Angela Merkel in Germany, but "Obama was unwilling to confront the U.S. business lobbies, which held out for major new concessions by the bigger developing economies."

So the choice that remains is what Bhagwati calls Doha Lite and Decaffeinated, which will be discussed starting tomorrow at the conference in Bali. Basically, this agreement would focus on the "trade facilitation" agenda, which according to an OECD study would emphasize issues like "the availability of trade-related information, the simplification and harmonization of documents, the streamlining of procedures and the use of automated processes." Indeed, the OECD maintains a website of 16 trade facilitation indicators. The OECD writes: "[C]omprehensive implementation of all measures currently being negotiated in the World Trade Organization's Doha Development Round would reduce total trade costs by 10% in advanced economies and by 13-15.5% in developing countries. Reducing global trade costs by 1% would increase worldwide income by more than USD 40 billion, most of which would accrue in developing countries."
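The OECD figures invite some quick back-of-envelope arithmetic. As a rough sketch, if the income gain scaled linearly with the size of the trade-cost reduction (my simplifying assumption for illustration, not a claim the OECD makes), the implied gains would look like this:

```python
# Back-of-envelope arithmetic on the OECD figures quoted above, assuming
# (purely for illustration) that the worldwide income gain scales linearly
# with the percentage cut in trade costs. Linearity is my assumption.

GAIN_PER_1PCT = 40e9  # USD: income gain per 1% cut in global trade costs

def income_gain(cost_reduction_pct):
    """Implied worldwide income gain, in USD, under linear scaling."""
    return cost_reduction_pct * GAIN_PER_1PCT

# The 10% reduction estimated for advanced economies would imply:
print(income_gain(10) / 1e9)   # 400.0 (billions of USD)
# The low end of the 13-15.5% range for developing countries:
print(income_gain(13) / 1e9)   # 520.0 (billions of USD)
```

Even if the true relationship is nowhere near linear, the order of magnitude, hundreds of billions of dollars against a world economy of tens of trillions, supports the point in the next paragraph: real money, but small in context.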

In the context of the world economy, these gains aren't very large–but they are still worth having. Also, as Bhagwati points out, even a Doha Lite and Decaffeinated agreement would help support the other aspects of the multilateral trade agreement: setting rules on issues like antidumping and government subsidies, and the dispute settlement mechanism. Bhagwati has characteristically sharp observations to make about the Trans-Pacific Partnership and the Transatlantic Trade and Investment Partnership that are also under negotiation. But like many trade economists, Bhagwati fears that a proliferation of regional trade agreements won't work well as global supply chains become ever-longer and emerging economies like China, Brazil, and India play an ever-larger role in the world economy.

Editor Hell

At the end of a long day at my job as the Managing Editor of the Journal of Economic Perspectives, it's always pleasant to consider those editors whose lives are harder than my own.

Consider the editors who have worked on the Oxford English Dictionary. Lorien Kite tells some of the stories in "The evolving role of the Oxford English Dictionary," which appeared in the Financial Times on November 15. For those not familiar with the OED, it not only aspires to include every word in the English language, whether in current use or archaic, but it also seeks to give examples of the usage of words over time. The full article is worth reading, but here are a few snippets.

"James Murray (1837-1915), the indefatigable editor who oversaw much of the first edition, was originally commissioned to produce a four-volume work within a decade; after five years, he had got as far as the word “ant”."

"When work began on OED3 in the mid-1990s, it was meant to be complete by 2010. Today, they are roughly a third of the way through and Michael Proffitt, the new chief editor, estimates that the job won't be finished for another 20 years."

"The first edition, published in 10 instalments between 1884 and 1928, defined more than 400,000 words and phrases; by 1989, when two further supplements of 20th-century neologisms were combined with the original to create the second, this had risen to some 600,000, with a full word count of 59m. Once the monumental task of revising and updating that last (and possibly final) printed incarnation is complete, the third edition is expected to have doubled in overall length."

"The OED records 750,000 individual “sessions” each month, most of which come via institutions such as libraries, universities, NGOs and government departments. … The surprising thing, explains Judy Pearsall, editorial director for dictionaries in OUP's global academic division, is that a quarter of these monthly visits are coming from outside what we think of as the English-speaking world. In September, the US accounted for the single biggest group of users, followed by the UK, Canada and Australia. At numbers five and six, however, are Germany and China. Readership from countries where English is not the first language is growing faster too …"

Thanks to Larry Willmore at his "Thought du Jour" blog for pointing out the article. I'm going to put it in my "Editor Hell" file folder, next to the example of Werner Stark, who edited the collected economic works of Jeremy Bentham. Here's my description of his task from an essay I wrote in 2009 called "An Editor's Life at the Journal of Economic Perspectives." (If you are curious about my personal background and approach to editing, you might find it an interesting read.)

[C]onsider the problems posed in editing the papers of Jeremy Bentham, the utilitarian philosopher and occasional economist. Bentham wrote perhaps 15 pages in longhand almost every day of his adult life. His admirers gathered some of his work for publication, but much was simply stored in boxes, primarily at the library of University College, London. In 1941, an economist named Werner Stark was commissioned by the Royal Economic Society to prepare a comprehensive edition of Bentham's economic writings, which in turn are just a portion of his overall writings. In the three-volume work that was published 11 years later (!), Stark (1952) wrote in the introduction:

The work itself involved immense difficulties. Bentham’s handwriting is so bad that it is quite impossible to make anything of his scripts without first copying them out. I saw myself confronted with the necessity of copying no less than nine big boxes of papers comprising nearly 3,000 pages and a number of words that cannot be far from the seven-figure mark. But that was only the first step. The papers are in no kind of order: in fact it is hard to imagine how they ever became so utterly disordered. They resemble a pack of cards after it has been thoroughly shuffled. . . . The pages of some manuscripts, it is true, were numbered, but then they often carried a double and treble numeration so that confusion was worse confounded, and sometimes I wished there had been no pagination at all. In other manuscript collections the fact that sentences run uninterruptedly from one sheet onto another, is of material help in creating order out of chaos. I was denied even this assistance. It was one of Bentham’s idiosyncrasies never to begin a new page without beginning at the same time a new paragraph. But I cannot hope to give the reader an adequate idea of the problems that had to be overcome. 


Stark’s lamentations would chill the heart of any editor. “Bentham was most unprincipled with regard to the use of capitals.” “After careful consideration, it was found impossible to transfer the punctuation of Bentham’s manuscripts on to the printed page. When he has warmed to a subject and is writing quickly, he simply forgets to punctuate . . . ” And so on.

Of course, the task of editing can have some extraordinary payoffs. Making Bentham's thoughts available and accessible to readers is of great importance. One can imagine a future in which you will buy the OED as an app for your e-reader or your word-processor, and definitions and past uses will be only a click away. In a 2012 essay "From the Desk of the Managing Editor," written for the 100th issue of the Journal of Economic Perspectives, I tried to describe some of what an editor can hope to accomplish:

Communication is hard. The connection between writer and reader is always tenuous. No article worth the reading will ever be a stroll down the promenade on a summer’s day. But most readers of academic articles are walking through swampy woods on a dark night, squelching through puddles and tripping over sticks, banging their shins into rocks, and struggling to see in dim light as thorny branches rake at their clothing. An editor can make the journey easier, so the reader need not dissipate time and attention overcoming unnecessary obstacles, but instead can focus on the intended pathway. 

Obstacles to understanding arise both in the form of content and argument and also in the nuts and bolts of writing. An editor needs a certain level of obsessiveness in confronting these issues, manuscript after manuscript, for the 1,000 pages that JEP publishes each year. Plotnick (1982, p. 1) writes in The Elements of Editing: “What kind of person makes a good editor? When hiring new staff, I look for such useful attributes as genius, charisma, adaptability, and disdain for high wages. I also look for signs of a neurotic trait called compulsiveness, which in one form is indispensable to editors, and in another, disabling.”

The ultimate goal of editing is to strengthen the connection between authors and readers. Barney Kilgore, who was editor of the Wall Street Journal during the years when its circulation expanded dramatically in the 1950s and 1960s, used to post a motto in his office that would terrify any editor (as quoted in Crovitz 2009): "The easiest thing in the world for a reader to do is to stop reading." An editor can help here, by serving as a proxy for future readers.

Shifting Components of the Dow Jones Industrial Index

The Dow Jones Industrial Index is based on the stock prices of 30 large blue-chip companies that in some ill-defined way are supposed to represent the core of the U.S. economy. Over time, some companies are replaced by others. In September, for example, the formerly private investment bank Goldman Sachs replaced the commercial bank Bank of America; the payments company Visa replaced the information technology company Hewlett-Packard; and the consumer clothing and gear company Nike Inc. replaced Alcoa Inc., which was traditionally an aluminum company but now has its finger in various elements of the design and manufacture of parts, along with recycling. The changes seemed to me symptomatic of broader changes in the US economy, which made me look back at the companies in the Dow over time.

The first official Dow Jones index was started in 1896, although Charles Dow had been putting out an earlier index, mainly of railroad stocks, as far back as the 1870s. Here, I'll just offer a few comparisons from more recent times. The companies in the Dow stayed unchanged from 1959 to 1976. The first column shows the list of Dow Jones index companies from that time period–call it roughly 40-50 years ago. The second column shows the companies in the Dow from 1994 to 1997, a little less than two decades ago. The third column shows the current list. These lists push me to think about how the US economy has been evolving.

As a starting point, compare the 1959-1976 list to the present. Only six companies appear on both lists: AT&T (which is of course a vastly different company now than when it was the monopoly provider of U.S. telephone services), DuPont, General Electric, and Procter and Gamble, along with Standard Oil (N.J.), which became Exxon and eventually ExxonMobil, and United Aircraft, which became United Technologies. A number of companies involving metal are out of the index: Aluminum Company of America, Allied Can, Anaconda Copper, International Nickel, and U.S. Steel, as are companies focused on commodities like American Can, International Paper, and Owens-Illinois Glass.

The new entries include tech companies like 3M, Cisco, IBM, and Microsoft, as well as financial companies like American Express, Visa, JP Morgan, Goldman Sachs, and Travelers. Health-care-related companies, like Merck, Pfizer, and UnitedHealth Group, are new entries. The face of American retailing was Sears in the earlier list; now it is Wal-Mart and Home Depot. The face of food products was General Foods in the earlier list; now it's Coca-Cola and McDonald's. The face of the oil industry was two Standard Oil companies (!) and Texaco in the earlier list; now it's ExxonMobil. International Harvester is off the list; Caterpillar is on.

The middle list is a snapshot of the transition between past and present. By my count, from the 1959-1976 list up to the mid-1990s, about half of the 30 companies in the Dow (16 of 30) remained in some form, although several changed their names (Allied Chemical became AlliedSignal, Standard Oil (N.J.) became Exxon, United Aircraft became United Technologies). Also, about half of the companies in the Dow in the mid-1990s (15 of 30) are no longer in the index at present. I don't claim to know what the "right" amount of turnover should be among top companies in a free-market society. But over the time frame of a few decades, the turnover is substantial.

Changes in America\’s Family Structure

When families get together for Thanksgiving and the holidays that follow, the structure of those families is different than it was a few decades ago. Jonathan Vespa, Jamie M. Lewis, and Rose M. Kreider of the U.S. Census Bureau provide some background in "America's Families and Living Arrangements: 2012" (August 2013). In some ways, none of the trends are deeply surprising, but in other ways, the patterns of households set the stage for our political and economic choices.

As a starting point, here's a graph showing changes in households by type. Married households with children were 40.3% of all US households in 1970; in 2012, that share had fallen by more than half to 19.6%. Interestingly, the share of households that were married without children has stayed at about 30%. Other Family Households, usually meaning single-parent families with children, have risen. Overall, the share of U.S. households that involve a family (either married or with children) was 81% back in 1970, but down to 66% in 2012. The share of households made up of men or women living alone has risen. The figures are not the same across gender in part because of differences in older age brackets: "Nearly three-quarters (72 percent) of men aged 65 and over lived with their spouse compared with less than half (45 percent) of women."

The average number of people in a household is falling. The share of households with five or more people has dropped by more than half, from 20.9% in 1970 to 9.6% in 2012. Meanwhile, 46% of households had 1 or 2 people in 1970, and 61% of households had 1 or 2 people in 2012.

One of the hot topics in the last few years has been 20-somethings moving home to live in their parents' basement. This pattern is visible in the Census data, but it's less striking over time than I might have thought, and it started before the onset of the recession. (The data in the early years of this figure come from different statistical surveys than the post-1983 data, so one shouldn't make too much of what appears to be a jump from 1980 to 1983.) My guess is that as the share of students enrolling in higher education has risen over time, some share of the rise in 18-24 year-olds living at home is represented by college students.

Finally, it's worth noting that one-quarter of all US children are being raised in single-parent households. These households have below-average incomes, and with only one parent at home, they are on average less able to provide hours of time at home.

The structure of households shapes politics and economics. For example, a greater share of adults living alone means a shift in housing demand away from big houses and toward apartments, and makes it more likely that these single-person households will locate in or near cities rather than in suburban houses. A smaller share of households with children means that when governments set priorities, support for schools will be lower. Households are also a way of sharing risk: a household with two adults has more possibilities for sharing the risk of job loss, or the risk that time needs to be spent dealing with sickness or injury. Therefore, the growth in single-person households tends to mean increased support for social methods of sharing risk, including government programs that support unemployment insurance or health insurance. To some extent, we are how we live.

An Economist Chews Over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there\’s anything wrong with that.

The last time the U.S. Department of Agriculture did a detailed \”Overview of the U.S. Turkey Industry\” appears to be back in 2007. Some themes about the turkey market waddle out from that report on both the demand and supply sides.

On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the Eatturkey.com website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.


On the production side, the National Turkey Federation explains: \”Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing – from breeding through delivery to retail.\” However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys.  Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

\”In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 33 thousand egg capacity per hatchery in 1975 to 704 thousand egg  capacity per hatchery in 2007.

Turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent.\”

U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a \”turkey\” as a product that doesn\’t have a lot of opportunity for technological development, but clearly I\’m wrong. Here\’s a graph showing the rise in size of turkeys over time.
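The 41 percent figure in the USDA quote is just the percentage growth in average bird weight from 1986 to 2006; here\’s the back-of-the-envelope arithmetic as a quick check:

```python
# Arithmetic check of the efficiency gain cited in the USDA report:
# average bird weight rose from 20.0 lbs (1986) to 28.2 lbs (2006).
w_1986 = 20.0
w_2006 = 28.2
gain = (w_2006 - w_1986) / w_1986  # fractional increase in weight
print(f"{gain:.0%}")  # prints "41%"
```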

The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here\’s a list of top turkey producers in 2011 from the National Turkey Federation.



For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant \”basket\” or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner fell about 1% in 2013, compared with 2012. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The lower line is relatively flat, especially since 1990 or so, which means that inflation in the price of the Classic Thanksgiving Dinner has actually been a pretty good match for the overall inflation rate.
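For the mechanics behind the two lines on the graph: a nominal price series is converted into inflation-adjusted terms by deflating it with a price index. Here\’s a minimal sketch in Python, where the dinner prices and index levels are illustrative round numbers, not the actual Farm Bureau or BLS figures:

```python
# Deflating a nominal price series into "real" (base-year) dollars.
# All numbers below are illustrative stand-ins, not actual data.
def real_price(nominal, cpi, base_cpi):
    """Express a nominal price in base-year dollars."""
    return nominal * base_cpi / cpi

nominal_dinner = {1990: 28.85, 2000: 32.37, 2013: 49.04}  # hypothetical dinner costs
cpi_level      = {1990: 130.7, 2000: 172.2, 2013: 233.0}  # hypothetical index levels

base = cpi_level[2013]  # express everything in 2013 dollars
for year, price in nominal_dinner.items():
    print(year, round(real_price(price, cpi_level[year], base), 2))
```

With numbers like these, the inflation-adjusted series comes out much flatter than the nominal one, which is the pattern in the figure above.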

Thanksgiving is my favorite holiday. Good food, good company, no presents–and all these good topics for conversation. What\’s not to like? 

(Note: This is an updated version of a post that was first published on Thanksgiving Day 2011.)

For those whose appetite for turkey-related economics is not yet satiated, I recommend that you turn next to the article in the New York Times last Sunday by Catherine Rampell, which tackles the puzzle of why the price of frozen turkeys tends to fall right before Thanksgiving, when one might expect demand to be highest. The article is here; a blog post with background information is here.

The Conundrum of EITC Overpayments (and the Health Insurance Exchanges)

Just to be up-front, I’m a long-time fan of the Earned Income Tax Credit. Other government programs to assist those with low incomes can easily discourage work. Imagine that every time a low-income person earns $1, they lose roughly $1 worth of benefits in welfare payments, food stamps, housing vouchers, maybe even losing their Medicaid health insurance. As a result, the incentive to work is dramatically reduced: for earlier posts with details on how this works in practice, see here and here. But with the EITC, when low-income households (especially those with children) earn income, the federal government pays them an additional tax credit based on their earnings. Thus, the EITC is a reward for working, not a payment conditional on not working. Here’s a nice readable overview of the EITC, from the Center on Budget and Policy Priorities.
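To see why the credit rewards work, it helps to look at the EITC’s phase-in / plateau / phase-out shape. Here’s a minimal sketch; the rate and threshold parameters are hypothetical round numbers for illustration, not the actual IRS schedule for any year or family size:

```python
# Stylized EITC schedule: the credit phases in with earnings, plateaus
# at a maximum, then phases out. All parameters are hypothetical.
def eitc(earnings, phase_in_rate=0.40, max_credit=5000.0,
         phase_out_start=18000.0, phase_out_rate=0.21):
    credit = min(phase_in_rate * earnings, max_credit)  # phase-in, then plateau
    if earnings > phase_out_start:                      # phase-out region
        credit -= phase_out_rate * (earnings - phase_out_start)
    return max(credit, 0.0)

for w in (5000, 12500, 18000, 30000, 45000):
    print(w, round(eitc(w), 2))
```

The key work incentive is the phase-in: at low earnings, each extra dollar earned raises the credit, the opposite of a benefit that is withdrawn as earnings rise.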

But the EITC does have a long-standing problem: a report from the Treasury Inspector General for Tax Administration estimates that about one-fifth of all EITC payments are made to those who don’t actually qualify for them. In 2012, the EITC had $11.6 billion to $13.6 billion in overpayments. Looking back over time, overpayments accumulated to between $110 billion and $132 billion over the decade from 2003-2012.

The report is “The Internal Revenue Service Is Not in Compliance With Executive Order 13520 to Reduce Improper Payments” (August 28, 2013, #2013-40-084). President Obama signed Executive Order 13520 on November 20, 2009, seeking to increase the accountability of federal agencies for reducing improper payments. But as TIGTA reports, the IRS has not succeeded in reducing these EITC overpayments. On the other side of the coin, TIGTA writes, “The IRS estimates that the participation rate for individuals who are eligible to receive the EITC is between 78 and 80 percent.”

Why is this problem so difficult? And if the federal government runs the EITC with a 20% error rate, what are the chances it can run the health care exchanges more effectively?

Here’s the EITC enforcement conundrum: The program is spending 20% or so of its funds on millions of households that aren’t eligible, while not reaching millions of households that are. But the amounts for any given household are small. The CBPP summary mentioned above writes: “During the 2010 tax year, the average EITC was $2,805 for a family with children and $262 for a family without children.” In addition, low-income people are moving on and off the program all the time. TIGTA writes: “Studies show that approximately one-third of EITC claimants each year are intermittent or first-time claimants.” It notes that some of the cases of EITC overpayment are just plain fraud, sometimes aided and abetted by those who prepare taxes. About two-thirds of tax returns that claim the EITC are filled out by outside preparers. But there are also lots of gray-area situations where the complexity of tax law and the EITC provisions, along with general confusion, makes it uncertain whether someone is eligible.

It won’t make economic sense for the IRS to hire a bunch of well-paid tax auditors to delve into the complex and incomplete financial records, home lives, and tax returns of millions of low-income families, hoping to recover a few hundred or even a couple of thousand dollars per family. Thus, the TIGTA report argues: “The IRS must develop alternatives to traditional compliance methods to significantly reduce EITC improper payments.” The IRS has made some efforts to communicate the law more clearly to paid tax preparers. But as TIGTA reports, “the IRS has made little improvement in reducing improper EITC payments as a whole …”

The federal government can do some large-scale programs pretty well. For example, it sends out Social Security checks in a cost-effective manner. With more of a paperwork struggle, it manages to cope with annual tax returns and Medicare and Medicaid payments to health care providers.

But the EITC is a program that involves complex rules for eligibility and size of payments, much more complex than Social Security. The EITC is aimed at low-income people, many of whom have economic and personal lives that are in considerable flux and many of whom have limited ability to deal with detailed paperwork, unlike the health care providers who receive Medicare and Medicaid payments. The envisioned health insurance exchanges are likely to end up serving many more people than the EITC. The complexity of decisions about buying health insurance is greater than the complexity of qualifying for cash payments from the EITC. When people’s eligibility for subsidies shifts from year to year, as it does for the EITC and will also for the health insurance exchanges, it will be difficult for the federal government to sort out eligibility. And as the complexity of the rules rises, it will spawn a network of people to help in filling out the forms, most of whom will be honest and forthright, but some of whom will be focused on making people eligible for the largest possible subsidies, with little concern for legal qualifications.

I remain a fan of the EITC, but I confess that as a matter of practical administration, I’m not at all sure of how to substantially reduce its persistent problem of overpayments in a cost-effective manner. Part of the answer probably involves finding ways to simplify the rules and the interface with recipients, so that eligibility can be more clear-cut. In addition, I suspect that practical problems that cause a mix of over- and underpayments for the EITC will be dwarfed by the practical problems and error rates in deciding about eligibility for subsidies in the health insurance exchanges–even if or when the web interface itself becomes functional.

An Okun\’s Law Refresher

Okun\’s law, which is really more of a rule of thumb, holds that for each increase of one percentage point in the growth rate of real GDP, the unemployment rate falls by about 0.3 percentage points. Arthur Okun formulated this rule in a 1962 research paper, “Potential GNP: Its Measurement and Significance,” which appeared in the Proceedings of the Business and Economics Statistics Section of the American Statistical Association (pp. 98-103). It\’s available as a Cowles Foundation working paper here. Michael T. Owyang, Tatevik Sekhposyan, and E. Katarina Vermann take a look at the current state of Okun\’s law, as the U.S. economy struggles with sluggish growth and a frustratingly gradual decline in its unemployment rate, in \”Output and Unemployment: How Do They Relate Today?\” which appears in the October 2013 issue of the Regional Economist, published by the Federal Reserve Bank of St. Louis. They argue that Okun\’s law has actually held up quite well over time.

Consider first the evidence roughly as Okun would have seen it back in the early 1960s. This figure shows the quarterly change in the unemployment rate and the quarterly growth rate of output from 1948-1960. Economic data series like GDP are updated over time, so this isn\’t quite the same data that Okun used. But it\’s close. Notice that the main pattern of the points is a downward slope: that is, a negative growth rate of GDP is correlated with a rise in unemployment, while a rise in GDP is correlated with a fall in unemployment.

Now here is the same graph, but including all the quarters from 1948 up into 2013. Time periods are distinguished by the shape of the points: 1948-1960 is blue squares, 1961-2007 is black dots,  and 2008-2013 is red triangles. They estimate that the Okun\’s law relationship over this time is that, on average, a 1 percentage point rise in the growth rate of real GDP is associated with a 0.28 percentage point fall in the unemployment rate–almost exactly the same as what Okun found in 1962.
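The estimate behind numbers like that 0.28 is just an ordinary least squares regression of the quarterly change in the unemployment rate on the quarterly growth rate of real GDP. Here\’s a minimal sketch; the six data points are made-up illustrative observations, not the actual BEA/BLS series:

```python
# Fitting an Okun's-law slope by ordinary least squares:
# change in unemployment = a + b * (real GDP growth).
def ols(x, y):
    """Return (intercept, slope) of a least-squares line through (x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

# Hypothetical quarterly data: annualized real GDP growth (%) and the
# change in the unemployment rate (percentage points).
gdp_growth = [4.0, -1.0, 2.5, 3.0, 0.5, 5.0]
d_unemp    = [-0.6, 0.8, -0.2, -0.3, 0.4, -0.9]

a, b = ols(gdp_growth, d_unemp)
print(f"Okun coefficient: {b:.2f}")  # negative slope: faster growth, falling unemployment
```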

The Triffin Dilemma and U.S. Trade Deficits

Martin Feldstein interviews Paul Volcker in the most recent issue of the Journal of Economic Perspectives. The interview ranges from the 1970s up to the present, and I commend it to you in full. But one of Volcker\’s comments in particular sent me scurrying to learn more. At one point in the interview, Volcker says: \” I think we’re back, in a way, in the Triffin dilemma.\” And I thought to myself: \”What the heck is that?\”

Here\’s a discussion of the Triffin dilemma from an IMF website describing problems in the international monetary system from 1959 to 1971, before the Bretton Woods system of (mostly) fixed exchange rates cratered. The IMF states:

\”If the United States stopped running balance of payments deficits, the international community would lose its largest source of additions to reserves. The resulting shortage of liquidity could pull the world economy into a contractionary spiral, leading to instability. … If U.S. deficits continued, a steady stream of dollars would continue to fuel world economic growth. However, excessive U.S. deficits (dollar glut) would erode confidence in the value of the U.S. dollar. Without confidence in the dollar, it would no longer be accepted as the world\’s reserve currency. The fixed exchange rate system could break down, leading to instability.\”

In short, the U.S. needs to run trade deficits, because the rest of the world wants to hold U.S. dollars as a safe asset, and the way the rest of the world gets U.S. dollars is when the U.S. economy buys imported products. However, if U.S. trade deficits go on for too long or become too large, then the U.S. dollar might stop looking like a safe asset, which in turn could bring its own wave of global financial disruption. Here\’s how Volcker describes the Triffin dilemma in the interview:

\”In the 1960s, we were in a position in the Bretton Woods system with the other countries wanting to run surpluses and build their reserve positions, so the reserve position of the United States inevitably weakened—weakened to the point where we no longer could support the convertibility of currencies to gold. Now, how long can we expect as a country or world to support how many trillions of dollars that the rest of the world has? So far, so good. The rest of the world isn’t in a very good shape, so we look pretty good at the moment. But suppose that situation changes and we’re running big [trade] deficits, and however many trillion it is now, it’s another few trillions. At some point there is vulnerability there, I think, for the system, not just for the United States. We ought to be conscious of that and do something about it.\”

Triffin laid out his views in a 1960 book called \”Gold and the Dollar Crisis,\” which is not all that easily available. But his 1960 Congressional testimony, available here, offers an overview. However, the Triffin dilemma as it exists today, in a world of flexible exchange rates, is not the same as it was back in 1960. (Of course, this is also why Volcker said that \”in a way,\” we had returned to the Triffin dilemma.) Lorenzo Bini Smaghi, a member of the Executive Board of the European Central Bank, laid out some of the similarities and differences in a 2011 talk, \”The Triffin dilemma revisited.\”

Smaghi notes the arrival of flexible exchange rates, but argues that the biggest change affecting the Triffin dilemma is something else: the arrival of multi-directional and private-sector capital flows in the global financial economy, which means that there are now lots of ways for other countries to get the U.S. dollars (or euros) that they wish to hold as a safe asset, without the U.S. (or the euro area) necessarily running a trade deficit. Smaghi said:

Today, the United States and the euro area are not obliged to run rising current account deficits to meet the demand for dollars or euros. This is for two main, interlinked reasons. First, well-functioning, more liquid and deeply integrated global financial markets enable reserve-issuing countries to provide the rest of the world with safe and liquid financial liabilities while investing a corresponding amount in a wide range of financial assets abroad. The euro has indeed become an important international currency since its inception and the euro area has been running a balanced current account. In a world where there is no longer a one-to-one link between current accounts, i.e. net capital flows and global liquidity, a proper understanding of global liquidity also needs to include gross capital flows. Second, under BW [Bretton Woods] global liquidity and official liquidity were basically the same thing, but today the “ease of financing” at global level also crucially depends on private liquidity directly provided by financial institutions, for instance through interbank lending or market making in securities markets. Given the endogenous character of such private liquidity, global official and private liquidity have to be assessed together for a proper evaluation of global liquidity conditions at some point in time, and there is no endemic shortage of global liquidity, as the empirical evidence confirms. This is not to deny that temporary shortages can occur, as happened after the bankruptcy of Lehman Brothers in September 2008. But such shortages are a by-product of shocks and boom-bust cycles, not an intrinsic feature of the IMS [international monetary system], and can be tackled with an appropriate global financial safety net.

So has the Triffin dilemma been eliminated? Smaghi argues not. He points out that there is a growing and widespread demand for U.S. dollar holdings around the world: from emerging market economies that want to hold U.S. dollar reserves as a hedge against sudden capital outflows or to keep their own currency undervalued, and from oil and other commodity exporters who prefer to hold their accumulated trade surpluses in the form of U.S. dollars. And when economic shocks occur, demand for these safe U.S. dollar assets can rise and fall in ways that threaten economic stability.
Triffin\’s policy solution, back in the day, was the creation of a global central bank that would issue a new \”reserve unit\” which central banks could hold as a safe asset, but which would not be linked to gold or national currencies. While Volcker doesn\’t endorse that approach, he does say that the way out lies in international monetary reform. Smaghi discusses the possibility that the Triffin dilemma might be much reduced in a multi-polar international monetary system, in which the U.S. dollar remains quite prominent, but those seeking safe assets can turn to a variety of other currencies, too.

This discussion may seem abstruse, so let me pull it back to some headline economic statistics. There was once a time, not so very long ago, when many people worried about what would happen if the U.S. economy ran sustained and large trade deficits. Through most of the 1960s and 1970s, the U.S. was fairly close to balanced trade. The first big swing into trade deficits happened in the mid-1980s, and the trade deficits since the late 1990s have been just as big, or bigger, than those of the 1980s. Here\’s a figure showing U.S. trade deficits as a share of GDP from the time Triffin enunciated his dilemma up to the present.

I suspect that the U.S. trade deficits in recent decades would have seemed almost impossibly large to Triffin and others back in 1960. But in recent decades, the rest of the world has wanted to hold trillions of U.S. dollars as a safe asset, and so the U.S. economy could import to its heart\’s content, send U.S. dollars abroad, and run enormous trade deficits. At some point, though, the accumulation of U.S. trade deficits may become so sustained and large that it leads to economic disruption. As Paul Volcker says: \”We ought to be conscious of that and do something about it.\”

(Thanks to Iva Djurovic for creating the trade deficit figure.)