Facts on U.S. Income Distribution, Before and After Taxes

However much and in whatever direction your knee-jerk reflexes twitch when the subject of income distribution arises, it's useful to start with a grounding of facts. The Congressional Budget Office lays out many of the key facts in its November 2014 report, "The Distribution of Household Income and Federal Taxes, 2011."

For starters, here's an overview of the distribution of before-tax, after-transfer, and after-tax income for the U.S. population. The population is first divided into fifths, or "quintiles," according to market income–which includes labor, business, and capital income.

Here are a few thoughts:

1) There's a nice illustration here of the difference between median and average. The average household income for the middle quintile is $49,800. This will also be roughly the median income for a U.S. household: that is, the level where 50% of households have more and 50% have less. But the average market income for all U.S. households as shown in the last column is $80,600, because the incomes of those at the top are so high that they pull up the average for the distribution as a whole.
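For readers who like to see the arithmetic, here is a minimal sketch in Python, using one hypothetical household per quintile. The numbers are placeholders chosen to echo the CBO figures, not the actual data; the point is just how a handful of very high incomes pulls the mean well above the median.

```python
# Hypothetical quintile averages (not the CBO data), one value per quintile.
incomes = [15_000, 30_000, 49_800, 75_000, 233_000]

mean_income = sum(incomes) / len(incomes)
median_income = sorted(incomes)[len(incomes) // 2]  # middle value

print(f"Mean:   ${mean_income:,.0f}")    # Mean:   $80,560
print(f"Median: ${median_income:,.0f}")  # Median: $49,800
```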

2) It's interesting that government transfers for those in the bottom quintile are smaller than those for other income groups. To be clear, the value of these government transfers includes cash payments from all levels of government–federal, state, and local–and also includes the value of in-kind transfers like Medicaid, Food Stamps, and Medicare.

3) Federal taxes rise steadily with income levels, as one would hope and expect.

What have patterns of market income and federal taxes looked like over time? First, here is the change in market income over time. This is divided into the lowest quintile, the three middle quintiles, the top quintile minus the top 1%, and then the top 1%.

Again, a few thoughts:

1) If one were looking only at the comparison of the bottom four quintiles with the 81st-99th percentile, the growth in inequality of income would actually be fairly stark. Over this time, the bottom four quintiles have each seen an increase in market income of 16%, while the 81st to 99th percentile group has seen a rise of 56%. When I look at this divergence, I find that I am comfortable with an explanation involving higher returns to skilled labor as the information and communications technology revolution has taken place.

2) But then there's the 1% line. The construction of the figure requires that all the lines start in the same place at 0% change in 1979, but the 1% line almost immediately rises faster than the other groups. Remember that this is a rise in market income, not a change in after-tax income that could be directly related to changes in tax rates at the top. The 1% line spikes in the 1990s, with the dot-com boom, and then falls, spikes again with the housing bubble and stock market resurgence before the Great Recession, and then falls again. It is hard to look at the 1% line and describe it as a smooth rise in the returns to skilled labor; it looks a lot more like the 1% are receiving a greater share of their income related to stock market gains, in ways that rise and fall with the market.

3) When I post this kind of figure, I often receive notes telling me to beware of the fact that the top 1% isn't the same each year, but over time is instead a rotating group. The point is a fair one. But it's also worth noting that there is no evidence of which I am aware suggesting that movement in and out of different income groups has increased over time. Thus, what is clearly a greater inequality of income does not appear to be offset by greater mobility.

What about the path of federal tax rates? Here's the path of tax rates over time that includes all federal taxes: that is, federal income taxes, payroll taxes, corporate income taxes (attributed back to individuals), and excise taxes. Again, the division here is top 1%, 81st-99th percentile, middle three quintiles, and bottom quintile. Notice that these are average tax rates. Thus, a person at the very top of the income distribution might well be paying a tax rate on the marginal dollar of market income received of 40% or more, but the average tax rate for that same person over all income received could well be the 29% shown for 2011 in the figure.
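The marginal-versus-average distinction is just arithmetic over brackets. Here is a minimal sketch with a made-up three-bracket schedule (the thresholds and rates are placeholders, not the actual federal schedule), showing how a 40% top marginal rate can coexist with an average rate near 29%:

```python
# Hypothetical brackets: (lower bound, upper bound, marginal rate).
BRACKETS = [(0, 100_000, 0.10), (100_000, 250_000, 0.25), (250_000, float("inf"), 0.40)]

def average_tax_rate(income: float) -> float:
    tax = 0.0
    for lo, hi, rate in BRACKETS:
        if income > lo:
            tax += (min(income, hi) - lo) * rate  # tax only the slice inside this bracket
    return tax / income

print(f"{average_tax_rate(500_000):.1%}")  # 29.5% average, despite the 40% marginal rate
```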

Clearly, there is some fall and rise and fall again in the average tax rates paid by the top 1%. There is also some fall since the mid-1990s in average federal tax rates on income paid by all income groups. But the changes in average tax rates are clearly much, much smaller than the changes in market income levels, which are what is really driving the rise in inequality.

As I have pointed out on this blog before, various reports have emphasized the theme that when after-tax, after-benefit inequality in the U.S. economy is compared to other high-income countries, the greater inequality in the U.S. economy is not primarily driven by the U.S. tax system being less progressive than that of other countries–which doesn't actually seem to be true. Instead, it is driven by the fact that the benefits paid by the U.S. government are not as targeted to those with lower incomes as the benefits paid in other countries. For example, here's a discussion of an OECD report on this theme, and here's a discussion of a CBO report noting that while U.S. redistribution via the tax code hasn't changed much in recent decades, redistribution via government spending has declined.

Fall 2014 Journal of Economic Perspectives

One of my hobbies is blogging as the Conversable Economist. But my actual paid job since 1986 has been Managing Editor of the Journal of Economic Perspectives. The journal is published by the American Economic Association, which several years back made the decision–much to my delight–that the journal would be freely available on-line, from the current issue back to the first issue in 1987. The journal's website is here, and this is the Table of Contents for the Fall 2014 issue, with direct links to the papers. I will include the abstracts below, and will probably blog about some of the individual papers in the next week or two, as well.

Symposium: Social Networks
"Networks in the Understanding of Economic Behaviors," by Matthew O. Jackson
Full-Text Access
"From Micro to Macro via Production Networks," by Vasco M. Carvalho
Full-Text Access
"Community Networks and the Process of Development," by Kaivan Munshi
Full-Text Access

Symposium: Tax Enforcement and Compliance
"How Can Scandinavians Tax So Much?" by Henrik Jacobsen Kleven
Full-Text Access
"Why Do Developing Countries Tax So Little?" by Timothy Besley and Torsten Persson
Full-Text Access
"Taxing across Borders: Tracking Personal Wealth and Corporate Profits," by Gabriel Zucman
Full-Text Access
"Tax Morale," by Erzo F. P. Luttmer and Monica Singhal
Full-Text Access

Articles
"The Economics of Guilds," by Sheilagh Ogilvie
Full-Text Access
"The Wages of Sinistrality: Handedness, Brain Structure, and Human Capital Accumulation," by Joshua Goodman
Full-Text Access

Features

"Retrospectives: The Cold-War Origins of the Value of Statistical Life," by H. Spencer Banzhaf
Full-Text Access
"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access
Correspondence: "The Missing Middle," by James Tybout
Full-Text Access

_______________________

Abstracts for the Fall 2014 Journal of Economic Perspectives 

"Networks in the Understanding of Economic Behaviors"
Matthew O. Jackson
As economists endeavor to build better models of human behavior, they cannot ignore that humans are fundamentally a social species with interaction patterns that shape their behaviors. People's opinions, which products they buy, whether they invest in education, become criminals, and so forth, are all influenced by friends and acquaintances. Ultimately, the full network of relationships–how dense it is, whether some groups are segregated, who sits in central positions–affects how information spreads and how people behave. Increased availability of data coupled with increased computing power allows us to analyze networks in economic settings in ways not previously possible. In this paper, I describe some of the ways in which networks are helping economists to model and understand behavior. I begin with an example that demonstrates the sorts of things that researchers can miss if they do not account for network patterns of interaction. Next I discuss a taxonomy of network properties and how they impact behaviors. Finally, I discuss the problem of developing tractable models of network formation.
Full-Text Access | Supplementary Materials

"From Micro to Macro via Production Networks"
Vasco M. Carvalho
A modern economy is an intricately linked web of specialized production units, each relying on the flow of inputs from their suppliers to produce their own output which, in turn, is routed towards other downstream units. In this essay, I argue that this network perspective on production linkages can offer novel insights on how local shocks occurring along this production network can propagate across the economy and give rise to aggregate fluctuations. First, I discuss how production networks can be mapped to a standard general equilibrium setup. In particular, through a series of stylized examples, I demonstrate how the propagation of sectoral shocks–and hence aggregate volatility–depends on different arrangements of production, that is, on different "shapes" of the underlying production network. Next I explore, from a network perspective, the empirical properties of a large-scale production network as given by detailed US input-output data. Finally I address how theory and data on production networks can be usefully combined to shed light on comovement and aggregate fluctuations.
Full-Text Access | Supplementary Materials

"Community Networks and the Process of Development"
Kaivan Munshi
My objective in this paper is to lay the groundwork for a new network-based theory of economic development. The first step is to establish that community-based networks are active throughout the developing world. Plenty of anecdotal and descriptive evidence supports this claim. However, showing that these networks improve the economic outcomes of their members is more of a challenge. Over the course of the paper, I will present multiple strategies that have been employed to directly or indirectly identify network effects. The second step is to look beyond a static role for community networks, one of overcoming market failures and improving the outcomes of their members in the short run, to examine how these informal institutions can support group mobility. A voluminous literature documents the involvement of communities in internal and international migration, both historically and in the contemporary economy. As with the static analysis, the challenge here is to show statistically that community networks directly support the movement of groups of individuals. I will show how predictions from the theory can be used to infer a link between networks and migration in very different contexts.
Full-Text Access | Supplementary Materials

"How Can Scandinavians Tax So Much?"
Henrik Jacobsen Kleven
American visitors to Scandinavian countries are often puzzled by what they observe: despite large income redistribution through distortionary taxes and transfers, these are very high-income countries. They rank among the highest in the world in terms of income per capita, as well as most other economic and social outcomes. The economic and social success of Scandinavia poses important questions for economics and for those arguing against large redistribution based on its supposedly detrimental effect on economic growth and welfare. How can Scandinavian countries raise large amounts of tax revenue for redistribution and social insurance while maintaining some of the strongest economic outcomes in the world? Combining micro and macro evidence, this paper identifies three policies that can help explain this apparent anomaly: the coverage of third-party information reporting (ensuring a low level of tax evasion), the broadness of tax bases (ensuring a low level of tax avoidance), and the strong subsidization of goods that are complementary to working (ensuring a high level of labor force participation). The paper also presents descriptive evidence on a variety of social and cultural indicators that may help in explaining the economic and social success of Scandinavia.
Full-Text Access | Supplementary Materials

"Why Do Developing Countries Tax So Little?"
Timothy Besley and Torsten Persson
Low-income countries typically collect taxes of between 10 and 20 percent of GDP, while the average for high-income countries is more like 40 percent. In order to understand taxation, economic development, and the relationships between them, we need to think about the forces that drive the development process. Poor countries are poor for certain reasons, and these reasons can also help to explain their weakness in raising tax revenue. We begin by laying out some basic relationships regarding how tax revenue as a share of GDP varies with per capita income and with the breadth of a country's tax base. We sketch a baseline model of what determines a country's tax revenue as a share of GDP. We then turn to our primary focus: why do developing countries tax so little? We begin with factors related to the economic structure of these economies. But we argue that there is also an important role for political factors, such as weak institutions, fragmented polities, and a lack of transparency due to weak news media. Moreover, sociological and cultural factors–such as a weak sense of national identity and a poor norm for compliance–may stifle the collection of tax revenue. In each case, we suggest the need for a dynamic approach that encompasses the two-way interactions between these political, social, and cultural factors and the economy.
Full-Text Access | Supplementary Materials

"Taxing across Borders: Tracking Personal Wealth and Corporate Profits"
Gabriel Zucman
This article attempts to estimate the magnitude of corporate tax avoidance and personal tax evasion through offshore tax havens. US corporations book 20 percent of their profits in tax havens, a tenfold increase since the 1980s; their effective tax rate has declined from 30 to 20 percent over the last 15 years, and about two-thirds of this decline can be attributed to increased international tax avoidance. Globally, 8 percent of the world's personal financial wealth is held offshore, costing governments more than $200 billion every year. Despite ambitious policy initiatives, profit shifting to tax havens and offshore wealth are rising. I discuss the recent proposals made to address these issues, and I argue that the main objective should be to create a world financial registry.
Full-Text Access | Supplementary Materials

"Tax Morale"
Erzo F. P. Luttmer and Monica Singhal
There is an apparent disconnect between much of the academic literature on tax compliance and the administration of tax policy. In the benchmark economic model, the key policy parameters affecting tax evasion are the tax rate, the detection probability, and the penalty imposed conditional on the evasion being detected. Meanwhile, tax administrators also tend to place a great deal of emphasis on the importance of improving "tax morale," by which they generally mean increasing voluntary compliance with tax laws and creating a social norm of compliance. We will define tax morale broadly to include nonpecuniary motivations for tax compliance as well as factors that fall outside the standard, expected utility framework. Tax morale does indeed appear to be an important component of compliance decisions. We demonstrate that tax morale operates through a variety of underlying mechanisms, drawing on evidence from laboratory studies, natural experiments, and an emerging literature employing randomized field experiments. We consider the implications for tax policy and attempt to understand why recent interventions designed to improve morale, and thereby compliance, have had mixed results to date.
Full-Text Access | Supplementary Materials

"The Economics of Guilds"
Sheilagh Ogilvie
Occupational guilds in medieval and early modern Europe offered an effective institutional mechanism whereby two powerful groups, guild members and political elites, could collaborate in capturing a larger slice of the economic pie and redistributing it to themselves at the expense of the rest of the economy. Guilds provided an organizational mechanism for groups of businessmen to negotiate with political elites for exclusive legal privileges that allowed them to reap monopoly rents. Guild members then used their guilds to redirect a share of these rents to political elites in return for support and enforcement. In short, guilds enabled their members and political elites to negotiate a way of extracting rents in the manufacturing and commercial sectors, rents that neither party could have extracted on its own. First, I provide an overview of where and when European guilds arose, what occupations they encompassed, how large they were, and how they varied across time and space. I then examine how guild activities affected market competition, commercial security, contract enforcement, product quality, human capital, and technological innovation. The historical findings on guilds provide strong support for the view that institutions arise and survive for centuries not because they are efficient but because they serve the distributional interests of powerful groups.
Full-Text Access | Supplementary Materials

"The Wages of Sinistrality: Handedness, Brain Structure, and Human Capital Accumulation"
Joshua Goodman
Left- and right-handed individuals have different neurological wiring, particularly with regard to language processing. Multiple datasets from the United States and the United Kingdom show that lefties exhibit significant human capital deficits relative to righties. Lefties score 0.1 standard deviations lower on cognitive skill measures, have more behavioral problems, have more learning disabilities such as dyslexia, complete less schooling, and work in occupations requiring less cognitive skill. Most strikingly, lefties have 10-12 percent lower annual earnings than righties, much of which can be explained by observable differences in cognitive skills and behavioral problems. Lefties work in more manually intensive occupations than do righties, further suggesting their primary labor market disadvantage is cognitive rather than physical. I argue here that handedness can be used to explore the long-run impacts of differential brain structure generated in part by genetics and in part by poor infant health.
Full-Text Access | Supplementary Materials

"Retrospectives: The Cold-War Origins of the Value of Statistical Life"
H. Spencer Banzhaf
This paper traces the history of the "Value of Statistical Life" (VSL), which today is used routinely in benefit-cost analysis of life-saving investments. The "value of statistical life" terminology was introduced by Thomas Schelling (1968) in his essay, "The Life You Save May Be Your Own." Schelling made the crucial move to think in terms of risk rather than individual lives, in the hope of dodging the moral thicket of valuing "life." But as recent policy debates have illustrated, his move only thickened it. Tellingly, interest in the subject can be traced back another twenty years before Schelling's essay to a controversy at the RAND Corporation following its earliest application of operations research to defense planning. RAND wanted to avoid valuing pilots' lives, but the Air Force insisted it confront the issue. Thus, the VSL is not only well acquainted with political controversy; it was born from it.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading"
Timothy Taylor
Full-Text Access | Supplementary Materials

"Correspondence: The Missing Middle"
James Tybout
Full-Text Access | Supplementary Materials

Why Experts Buy Generic

You are buying an over-the-counter medication. You see several shelves of brand-name medications of the type you want, along with a generic store brand that is typically much cheaper. Do you buy the cheaper generic version? And perhaps more interesting, if you were standing beside an expert who really knew what was in all of the medications, would the expert buy the cheaper generic version?

Bart J. Bronnenberg, Jean-Pierre Dubé, Matthew Gentzkow, and Jesse M. Shapiro tackle this question in "Do Pharmacists Buy Bayer? Informed Shoppers and the Brand Premium." It's available as National Bureau of Economic Research Working Paper #20295, August 2014. (NBER working papers are not freely available on-line to everyone, but they are inexpensively available, and many readers will have access through library subscriptions.) They start the paper by pointing out the example of aspirin as a generic equivalent (citations omitted).

A 100-tablet package of 325mg Bayer Aspirin costs $6.29 at cvs.com. A 100-tablet package of 325mg CVS store-brand aspirin costs $1.99. The two brands share the same dosage, directions, and active ingredient. Aspirin has been sold in the United States for more than 100 years, CVS explicitly directs consumers to compare Bayer to the CVS alternative, and CVS is one of the largest pharmacy chains in the country, with presumably little incentive to sell a faulty product. Yet the prevailing prices are evidence that some consumers are willing to pay a three-fold premium to buy Bayer.

A short readable overview of the paper is available from NBER here, and here are some notable findings from the overview.

In a detailed case study of headache remedy purchases, the researchers find that more-informed consumers are less likely to pay extra to buy national brands, with pharmacists choosing them over store brands only 9 percent of the time, compared with 26 percent of the time for the average consumer. Similarly, chefs devote 12 percentage points less of their purchases of kitchen staples to national brands than otherwise similar non-chefs. …

Controlling for household income, other demographics, and the market, chain, and quarter in which the purchase is made, a household whose primary shopper correctly identifies all active ingredients in a national brand has an 85 percent chance of purchasing a store brand, 19 percentage points higher than a shopper who identifies none of the ingredients. … When the primary shopper is either a pharmacist or a physician, the probability of purchasing the store brand is 91 percent, 15 percentage points higher than the probability of otherwise similar buyers who are not in these fields. Primary shoppers who were science majors in college buy more store brands than those with other college degrees. In a second case study of pantry staples such as salt, sugar, and baking powder, the researchers find that chefs devote nearly 80 percent of their purchases to store brands, compared with 60 percent for the average consumer.

The overall pattern is clear: those who are less knowledgeable are more likely to buy brand names, presumably because they feel that there is a quality difference in doing so. Those who are more knowledgeable are more likely to go with generic equivalents, because they feel comfortable making their own judgments about quality–and then going with the lower price.

The data for the study comes from the Nielsen Homescan Panel. It includes information on purchases made on more than 77 million shopping trips by 125,114 households between 2004 and 2011. People in the panel scan barcodes for all consumer packaged goods they buy. As a result, the data includes detailed information on the product and price, as well as when and where it was bought. The Nielsen survey also has basic information about households, like household composition, education, income, race, age, homeownership, and area of residence. The researchers supplemented this data by doing their own survey to gather more information on the specific jobs held by the panelists, and whether panelists can name the specific ingredients in various products. The researchers can then use this mass of data to compare the buying patterns of different groups between branded and generic goods. Here's their bottom line:

We estimate that consumers spend $196 billion annually in consumer packaged goods categories in which a store-brand alternative to the national brand exists, and that they would spend approximately $44 billion less (at current prices) if they switched to the store brand whenever possible. If consumers are systematically misled by brand claims, this has clear implications for evaluating the welfare effects of the roughly $140 billion spent on advertising each year in the US, and for designing federal regulation to minimize the potential for harm …

Life Expectancy Risk and Annuities

In a series of television ads for Ameriprise, spokesman Tommy Lee Jones asks some version of the question: "Retirement. Will You Outlive Your Money?" Katharine G. Abraham and Benjamin H. Harris tackle various aspects of this question in a November 6 research paper from the Economic Studies group at the Brookings Institution: "Better Financial Security in Retirement? Realizing the Promise of Longevity Annuities."

If everyone knew precisely how long they were going to live, along with what expenses needed to be incurred along the way, retirement planning would be considerably easier. But a lot of Americans seem to underestimate how long they will live. A survey done back in 1991-92 asked Americans who at that time were aged 58-61 what percentage chance they had of living to age 75. Enough time has now passed that we know how many actually lived to age 75. For example, the table below shows that of those who said they had a 0% chance of living to age 75, 49.2% did actually live to age 75, as did 59.9% of those who thought they had a 10% chance of reaching 75, and 64.6% of those who thought they had a 20% chance of reaching 75. Again, this survey wasn't asking 20 year-olds or 30 year-olds, but was asking people who were around age 60–and who presumably knew something about their health status.

Indeed, the table does suggest that people know something about their health status. Those who thought they had a better chance of living to age 75 mostly did live longer. But those who were presumably in poorer or average health, or more pessimistic for other reasons, and who said that they had a 70% or less chance of living to age 75, seem to have systematically underestimated how long they were likely to live. At the other end, those who were in the best of health or more optimistic, who thought they had a 90 or 100% chance of living to age 75, tended to overestimate their chances.

Annuities are the straightforward financial tool that provides insurance against running out of money before the end of life: basically, you pay for the annuity up front, and then the company guarantees you a stream of payments for the rest of your life.

But many people don't like the idea of annuities, for a variety of reasons. Many people don't like the idea that they will buy an annuity and then die sooner than expected, thus "losing money" on the annuity. (Of course, it is an unavoidable reality of insurance that people should be happy if they pay for the insurance year after year, but end up never needing to use it because they don't have the accident or problem for which they bought insurance in the first place. But most people dislike this aspect of insurance.) Other people fear that they might need to incur a large expense in the future, perhaps for health care or to help a family member, and if they have annuitized a large share of their retirement wealth they would lose that flexibility. Some may figure that they already have a life-long annuity, in the form of Social Security, so they don't wish to put any more of their wealth into annuity form. Some people fear, with reason, that the markets for annuities in the past didn't always offer that good a deal in terms of what it cost to guarantee a certain stream of income in the future, so that they would rather sit down with a financial adviser and plan their own path for spending.

For an in-depth discussion of these issues, I recommend an essay on "Annuitization Puzzles" by Shlomo Benartzi, Alessandro Previtero and Richard H. Thaler in the Fall 2011 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since 1987, so I am predisposed to think the articles are of consuming interest.) These authors argue that many people do very much like annuities. For example, Social Security is an annuity program, and it is quite popular. Most of the people who are currently receiving a "defined benefit" pension plan that pays a steady amount for the rest of their lives are not eager to switch over to a "defined contribution" plan where they would need to manage their own wealth. The authors argue that the lack of annuity purchasing is due more to the decision-making hurdles that face people who are thinking about buying annuities. They write: "An annuity should be viewed as a risk-reducing strategy, but it is instead often considered a gamble: 'Will I live long enough for this to pay off?'"

However, those who believe that many people have a pent-up demand for annuities do face a difficult empirical puzzle. The decision about what age to claim Social Security can be viewed as a decision about implicitly buying an annuity. Consider a person who retires at age 62 or 65, but lives off their savings for several years and doesn't claim Social Security until age 70. In effect, that person is "buying" an annuity by not receiving Social Security payments sooner, and in exchange will receive a larger monthly Social Security check because of deferring the start of payments until age 70. For many people, this "Retire Now, Social Security Later" option would make them better off over their lifetime. But again, many people worry that if they don't start collecting Social Security as soon as possible, they will die in the near future and thus would have "wasted" their Social Security benefits.
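A rough break-even calculation shows the tradeoff. Here is a minimal sketch; the two benefit levels are hypothetical placeholders (actual amounts depend on earnings history, birth year, and the Social Security benefit formula), not numbers from the paper:

```python
# Claim at 62 and bank eight years of smaller checks, or defer to 70 for a
# larger check? Find the age at which the deferral strategy catches up.
monthly_at_62 = 1_500  # hypothetical reduced benefit, claimed early
monthly_at_70 = 2_640  # hypothetical larger benefit after deferring to 70

head_start = (70 - 62) * 12 * monthly_at_62        # benefits banked by claiming early
annual_gap = 12 * (monthly_at_70 - monthly_at_62)  # extra received each year after 70
break_even_age = 70 + head_start / annual_gap

print(f"Deferral pulls ahead around age {break_even_age:.0f}")  # ~ age 81
```

Anyone expecting to live well past that break-even age comes out ahead by deferring; anyone who does not, doesn't. That is exactly the "will I live long enough for this to pay off?" gamble framing quoted above.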

The specific focus for Abraham and Harris is so-called "longevity annuities." With a standard annuity, you pay a big chunk of money up front, and then receive a stream of payments for the rest of your life. With a longevity annuity, you pay a big chunk of money up front, wait 10 or 20 years, and then receive a stream of payments for the rest of your life. Of course, the benefit is that if you pay now and wait to receive the payments until later, the payments can be larger–even considerably larger. Here's a table showing the payoffs for a man or woman buying an annuity at age 60. If the man starts receiving payments immediately, he gets $534 per month. If the man waits for 15 years, his monthly payments would be about three times as much.
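The logic behind that multiple can be sketched with a back-of-the-envelope actuarial calculation: an actuarially fair monthly payment is the premium divided by the expected discounted number of payment months. The survival curve and discount rate below are crude placeholders, not the pricing in the table, but they produce a similar multiple:

```python
# Why deferral raises the payout: the same premium buys fewer, less likely,
# and more heavily discounted payment months.
PREMIUM = 100_000
R = 0.03 / 12      # assumed monthly discount rate
HORIZON = 40 * 12  # months from age 60 to age 100

def survival(m: int) -> float:
    # Toy survival curve for a 60-year-old, declining linearly to zero at 100.
    return max(0.0, 1 - m / HORIZON)

def fair_monthly_payment(defer_years: int) -> float:
    start = defer_years * 12
    expected_discounted_months = sum(survival(m) / (1 + R) ** m for m in range(start, HORIZON))
    return PREMIUM / expected_discounted_months

print(f"Immediate:    ${fair_monthly_payment(0):,.0f}/month")   # ~ $600
print(f"15-year wait: ${fair_monthly_payment(15):,.0f}/month")  # ~ $2,100, about 3.5x here
```

A real insurer would use mortality tables and load for expenses and adverse selection, so actual quotes differ; the sketch only shows why waiting buys a bigger check.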

The potential benefit of longevity annuities is that they are truly insurance against outliving your assets, by offering a relatively large payoff if you live to a longer age than you expect. For example, a person could buy a longevity annuity that is set to kick in at age 80 or 85, and then figure that they can pretty much spend the rest of their wealth before that age, secure in the knowledge that they have a backstop in place if they live longer than expected. The tradeoff, as with all insurance policies, is that if you don't reach the age where the longevity annuity kicks in, your money ends up being paid to someone else who lived longer than expected.

The market for longevity annuities is growing, but is still small. Abraham and Harris write:

"While the market for deferred-income annuities (DIAs) has blossomed in recent years, many DIAs are sold to individuals in their 50s and almost all are sold with deferral periods of less than 15 years. The current market for true longevity annuities remains very thin. … After managing just $50 million in sales a few years earlier, deferred income annuities reached $2 billion in sales in 2013. … One risk that a standard longevity annuity contract does not address is inflation risk. … [W]e are not aware of any currently offered longevity annuity product that includes an inflation protection option …"

With the spread of 401(k) and IRA and other tax-deferred retirement accounts in recent decades, more and more people are going to face the question of whether to buy annuities in the future. Abraham and Harris look at the distribution of funds that people have in these kinds of accounts. They find that the bottom half or so have little or nothing saved in a defined contribution account–in part because many of them don't have such an account in the first place. But among those in the 55-64 age bracket, the top 25% have $143,000 or more in such an account, and the top 10% have $463,000 or more in such an account.

Annuities may turn out to be one of those products that people don't like to buy, but after they have taken the plunge, they are glad that they bought. One can imagine an option where some degree of annuitization of wealth could be built into 401(k) and IRA accounts. For example, it might be that the default option is that 30% of what goes into your 401(k) or IRA goes to a regular annuity that kicks in when you retire, another 20% goes to a longevity annuity that kicks in at age 80, and the other 50% is treated like a current retirement account, where you can access the money pretty much as you desire after retirement. If you wanted to alter those defaults, you could do so. But experience teaches that many people would stick with the default options, just out of sheer inertia–and that many of them would be glad to have some additional annuity income after retirement.

The Two New Tools of Monetary Policy

Every basic exposition of how monetary policy was conducted before about 2008 is soon to become obsolete. The three basic monetary policy tools that used to be taught in almost every introductory economics course were changing reserve requirements, changing the discount rate, and conducting open market operations. Because of the way in which monetary policy was conducted during the Great Recession, all three tools have become essentially irrelevant. In the future, the Federal Reserve will mainly influence interest rates through two quite different tools that didn't even exist before 2008: primarily by using the rate of interest that it chooses to pay on bank reserves, and secondarily by using its new reverse repo facility.

There's no secret about these new tools. For example, here's an explanation from the September 16-17 meeting of the Federal Open Market Committee:

  • When economic conditions and the economic outlook warrant a less accommodative monetary policy, the Committee will raise its target range for the federal funds rate. 
  • During normalization, the Federal Reserve intends to move the federal funds rate into the target range set by the FOMC primarily by adjusting the interest rate it pays on excess reserve balances. 
  • During normalization, the Federal Reserve intends to use an overnight reverse repurchase agreement facility and other supplementary tools as needed to help control the federal funds rate. The Committee will use an overnight reverse repurchase agreement facility only to the extent necessary and will phase it out when it is no longer needed to help control the federal funds rate. 

Let's do a quick review of the old monetary policy tools, explain why they no longer work, and then introduce the new monetary policy tools.

In the old days before 2008, monetary policy worked because banks were required to hold reserves with the central bank, but banks typically tried not to hold extra reserves, because money held with the central bank earned no return. The requirement that banks hold reserves gave the central bank leverage over how much credit banks were willing to offer, and led to what used to be considered the three main tools of monetary policy.

1) The central bank could alter the reserve requirement, which meant that banks had either more or less to lend than before.

2) If a bank reached the end of a business day and found that the last round of transactions had left it a little short of what it needed to be holding, one option was to borrow the additional funds from the central bank. When doing so, the bank would need to pay an interest rate called the discount rate. So by raising or lowering the discount rate, the central bank could influence whether a bank kept a fairly small or fairly large cushion of reserves over and above the required amount–and thus affect the bank's willingness to lend.

3) However, banks usually preferred not to borrow from the central bank, because doing so was often viewed as a danger signal that no other private bank was willing to lend to you. Instead, banks that were running a little short of reserves at the end of the day's transactions borrowed overnight from other banks and paid the so-called "federal funds interest rate." The Federal Reserve treated this federal funds interest rate as a target for monetary policy. It would buy or sell government bonds to set the federal funds interest rate at a desired level.

For example, here's a figure (from the ever-useful FRED website run by the Federal Reserve Bank of St. Louis) showing the federal funds rate over time. You can tell the history of monetary policy back a half-century by the Fed reducing this federal funds rate in recessions (the gray bars) and increasing it when recessions were over and inflation seemed at least a potential threat. You can also see how the Fed took this interest rate to near-zero during the Great Recession, and has held it there ever since, which is why there has been a need to use quantitative easing and forward guidance as tools of monetary policy since then.

Notice that all three of these traditional tools of monetary policy rely in one way or another on banks being fairly close to the level of required reserves. However, as part of its "quantitative easing" response to the financial crisis, the Fed started buying large quantities of government debt and mortgage-backed securities. Instead of holding these bonds and securities, banks found that they were holding very large quantities of cash reserves, far above the legally required level. To be specific, U.S. banks were legally required to be holding $101 billion in reserves as of October 29, but they were actually holding $2,557 billion in reserves–about 25 times the required amount.

But when banks are holding large quantities of extra reserves, the old-style monetary policies don't work. Change the reserve requirement? Reducing it won't matter, and unless the central bank raises it by a multiple of 25, raising it doesn't matter either. Also, no one wants to try to lock up these excess bank reserves, because the hope is that banks will find ways to lend this money. Altering the discount rate has no effect if banks don't need to borrow from the central bank, because there is no danger they will run short of reserves. And banks don't need to borrow much from each other at the federal funds interest rate, either, again because they are holding such high levels of reserves. Indeed, the quantity loaned and borrowed at the federal funds interest rate has dropped dramatically in recent years, as shown in this figure from economists at the Federal Reserve Bank of New York.

[Figure: total federal funds sold, Federal Reserve Bank of New York]

The situation of banks holding vastly more reserves than legally required, as a result of the quantitative easing policies of the Fed, seems likely to persist for years to come. So when the Federal Reserve decides that it wishes to raise the federal funds interest rate, what monetary policy tools will work in this situation?

The main monetary policy tool of the future is likely to be the amount of interest that the Fed pays on these bank reserves. Traditionally, the Fed paid zero percent interest on bank reserves. But a law passed in 2006 authorized the Fed to start paying interest on reserves as of 2011, and the Emergency Economic Stabilization Act of 2008 allowed the Fed to start paying interest on bank reserves as of October 1, 2008. The current interest rate paid by the Federal Reserve to banks on their excess reserves is 0.25 percent. By altering this interest rate, the Federal Reserve has a tool that can directly affect how much banks want to hold in reserves, how much they are willing to lend, and at what interest rates. For example, say that the Fed gradually raised the interest rate it pays on bank excess reserves. In that case, banks would be more likely to hold funds in the form of reserves with the central bank and less likely to make loans, including overnight loans to each other in the federal funds market, which should tend to push up interest rates. In short, when the Fed decides that it's time to raise interest rates, the policy tool you should be watching is the interest rate paid on bank reserves.

What if this new policy tool doesn't work well? The back-up monetary policy tool, as the Fed said in the quotation above, is the "overnight reverse repurchase agreement facility," which will be used "only to the extent necessary" and phased out "when it is no longer needed to help control the federal funds rate."

Some definition of terms for those uninitiated in repurchase markets may be useful here. In a "repurchase agreement," one party sells a security to another, while simultaneously agreeing to repurchase it at a slightly higher price in the near future–often the next day. For the other party, the one which is agreeing to buy a security and then sell it back the next day, this is called a "reverse repurchase" or reverse repo agreement. The Fed has been testing its ability to operate in the repurchase market since September 2013, first selling and then buying back Treasury securities. Here's a description of reverse repurchase agreements from the Federal Reserve Bank of New York, which would be conducting these operations.

A reverse repurchase agreement, also called a “reverse repo” or “RRP,” is an open market operation in which the Desk [the New York Fed trading desk] sells a security to an eligible RRP counterparty with an agreement to repurchase that same security at a specified price at a specific time in the future. The difference between the sale price and the repurchase price, together with the length of time between the sale and purchase, implies a rate of interest paid by the Federal Reserve on the cash invested by the RRP counterparty.

For policy purposes, the key point here is the last sentence: a repurchase agreement is really a form of very short-term lending and borrowing. Investors who have a lot of cash are always looking for a chance to earn a return, even a small return. When an investor with cash buys a Treasury bond from the Fed, and then sells it back to the Fed the next day, that investor has received, in effect, an interest payment on their cash. Thus, when the Fed conducts its repurchase operations, it is in effect setting the level of interest rates for very short-term overnight borrowing. Such overnight borrowing using Treasury bonds as collateral is very similar to the overnight borrowing that banks do with each other at the federal funds interest rate.
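The implied interest rate falls straight out of the two prices. Here is a minimal sketch; the prices are hypothetical, and the 360-day money-market year is an assumption about convention:

```python
# Overnight reverse repo from the counterparty's side: buy the security
# from the Fed today, sell it back tomorrow at a slightly higher price.
sale_price = 1_000_000.00        # what the counterparty pays the Fed today
repurchase_price = 1_000_001.39  # what the Fed pays to buy it back tomorrow
days = 1

# Annualize the one-day return using a 360-day money-market year.
implied_annual_rate = (repurchase_price - sale_price) / sale_price * (360 / days)
print(f"{implied_annual_rate:.2%}")  # 0.05% -- an overnight interest rate in all but name
```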

Jeremy Stein, who left the Federal Reserve Board of Governors in May, spelled out how this will work in an interview last July with Ylan Q. Mui in the Washington Post. Basically, the notion is that the interest rate paid on reserves will establish a ceiling for the federal funds interest rate, while the interest rate embedded in the reverse repo facility will set a floor under that rate. But how much space there will be between the ceiling and the floor, and just how these tools will be used in setting monetary policy, is still very much a work in progress, waiting for when the Fed decides it is actually time to raise the federal funds interest rate.
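Mechanically, the ceiling-and-floor idea reduces to clamping a market rate into a corridor. Here is a toy sketch of that logic, with made-up rates rather than anything the Fed has announced:

```python
# Corridor logic as Stein describes it: the rate paid on reserves caps the
# federal funds rate from above; the reverse repo rate supports it from
# below. Both rates here are hypothetical placeholders.
INTEREST_ON_RESERVES = 0.0125  # hypothetical ceiling
REVERSE_REPO_RATE = 0.0100     # hypothetical floor

def effective_fed_funds(market_rate: float) -> float:
    # Whatever rate supply and demand would otherwise produce gets
    # squeezed into the corridor.
    return min(max(market_rate, REVERSE_REPO_RATE), INTEREST_ON_RESERVES)

for r in (0.0050, 0.0110, 0.0200):
    print(f"market {r:.2%} -> effective {effective_fed_funds(r):.2%}")
# market 0.50% -> effective 1.00%
# market 1.10% -> effective 1.10%
# market 2.00% -> effective 1.25%
```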

For purely pedagogical reasons, I'm hoping the use of reverse repos doesn't come up too frequently and can largely be ignored, because explaining that tool of monetary policy to an intro-level economics class will be just no fun at all.

Why Different Unemployment Measures Tell (Mostly) the Same Story

As the unemployment rate has decreased during the last few years, from its peak of 10% in October 2009 to 5.9% in September 2014, it's become common to hear a cynical reaction along the following lines: "Sure, the official unemployment rate was 5.9% in September 2014, but when you take into account those who have become too discouraged to look for a job and those with a part-time job who would prefer full-time work, the real unemployment rate is twice as high at 11.8%."

This comment is usually made in a "gotcha" tone, with an eyebrow-lifting emphasis on official and real, as if those nefarious government economic statisticians are trying to pull a fast one on the unsuspecting public. But color me unsurprised that if you define "unemployment" in different ways, you get a different number. In fact, the Bureau of Labor Statistics has been publishing alternative unemployment rates quite openly since 1976. Vernon Brundage lays out the distinctions in "Trends in unemployment and other labor market difficulties," written as the November 2014 "Beyond the Numbers" from the U.S. Bureau of Labor Statistics.

Here are the six measures of unemployment produced by the BLS. U-3 is the official unemployment rate. (Wiggle eyebrows on official as needed.) A short sketch after the list shows how each measure is computed from the underlying counts.
  • U‑1: People who are unemployed for 15 weeks or longer as a percent of the civilian labor force.
  • U‑2: Job losers, plus people who completed temporary jobs, as a percent of the civilian labor force.
  • U‑3: Total number of people who are unemployed as a percent of the civilian labor force (official unemployment rate).
  • U‑4: Total number of people who are unemployed, plus discouraged workers, as a percent of the civilian labor force plus discouraged workers.
  • U‑5: Total number of people who are unemployed, plus discouraged workers, plus all other persons marginally attached to the labor force, as a percent of the civilian labor force plus all persons marginally attached to the labor force.
  • U‑6: Total number of people who are unemployed, plus all persons marginally attached to the labor force, plus total employed part time for economic reasons, as a percent of the civilian labor force plus all persons marginally attached to the labor force.
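To make the definitions concrete, here is a minimal sketch that computes all six measures from a set of counts. The counts are illustrative placeholders (not actual BLS data), chosen so the results land near the September 2014 figures quoted in this post:

```python
# Illustrative counts, in millions of people (not actual BLS data).
unemployed = 9.3
unemployed_15wks = 4.4       # unemployed 15 weeks or longer
job_losers = 4.5             # job losers plus completed temporary jobs
discouraged = 0.7
marginally_attached = 2.2    # includes discouraged workers
part_time_economic = 7.1     # part time for economic reasons
labor_force = 156.0          # civilian labor force

u1 = unemployed_15wks / labor_force
u2 = job_losers / labor_force
u3 = unemployed / labor_force  # the official rate
u4 = (unemployed + discouraged) / (labor_force + discouraged)
u5 = (unemployed + marginally_attached) / (labor_force + marginally_attached)
u6 = (unemployed + marginally_attached + part_time_economic) / (labor_force + marginally_attached)

for name, rate in [("U-1", u1), ("U-2", u2), ("U-3", u3), ("U-4", u4), ("U-5", u5), ("U-6", u6)]:
    print(f"{name}: {rate:.1%}")  # roughly 2.8%, 2.9%, 6.0%, 6.4%, 7.3%, 11.8%
```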

When looking at how any economic statistic has changed over time, it's important to compare apples to apples. So if you are interested in the question of how the unemployment rate has changed, it's not useful to point out that different definitions of unemployment provide different answers. These six definitions of unemployment typically move pretty much together.

So if you want to focus on a broader measure of unemployment like U-6 that includes those who have become discouraged in looking for work and part-timers who would like full-time work, fair enough: That measure of unemployment was 17.2% in April 2010 and had fallen to 11.8% by September 2014. But pick a measure of unemployment, any measure of unemployment, and the U.S. economy is doing much better now than back in the dark days of 2009 and 2010.

Of course, the relationships between these measures of unemployment do change over time. For example, the U-2 measure of unemployment has usually been higher than U-1, but they have been about the same in the last few years. Brundage explains:

For most of the history of these series, the number of persons unemployed for 15 weeks or longer (the numerator for U‑1) had been less than the number of people who had lost their jobs or completed temporary jobs (the numerator for U‑2), even during economic downturns. … However, the two series began to converge shortly after the end of the recession, largely reflecting a greater increase in the number of people who were unemployed for 15 weeks or longer during the downturn. In December 2007, the number of people who had been unemployed for 15 weeks or longer (2.5 million) was well below the number of job losers (3.9 million). In September 2014, 4.4 million people had been jobless for 15 weeks or longer and 4.5 million had lost their jobs; thus, the U‑1 and U‑2 rates were very similar, at 2.8 and 2.9 percent, respectively.

The other change is that the ratio between the broadly defined U-6 unemployment rate and the U-3 official unemployment rate has risen. The broadly defined U-6 rate adds to the official unemployment rate the part-timers who would prefer a full-time job and the "marginally attached" who have largely given up looking for work. As this figure shows, the number of the marginally attached has actually not risen too much, but the number of part-timers who would prefer full-time work remains elevated since the end of the recession.

In short, there's no conspiracy here by government statisticians to lowball the official unemployment rate. Each measure of unemployment is defined differently. Each one conveys different information about the labor market. And that's why the Bureau of Labor Statistics quite openly and transparently publishes six different measures.

Credit Without Banks: Shadow Banking

One of the vivid lessons of the 2007-2009 recession and financial crisis is that in the modern economy, one can't just think about the financial sector as made up of banks and the stock market. Other financial institutions can go badly wrong, with dire consequences. The Global Shadow Banking Monitoring Report 2014 from the Financial Stability Board helps give a sense of these other non-banking financial institutions–especially those that sometimes act in bank-like ways by receiving funds from investors, lending out those funds, and receiving interest payments.

The Financial Stability Board is an international working group that bubbled up in 2009 in the wake of the financial crisis. As it describes itself: "The FSB has been established to coordinate at the international level the work of national financial authorities and international standard setting bodies and to develop and promote the implementation of effective regulatory, supervisory and other financial sector policies in the interest of financial stability. It brings together national authorities responsible for financial stability in 24 countries and jurisdictions, international financial institutions, sector-specific international groupings of regulators and supervisors, and committees of central bank experts."

The FSB starts with a measure of "Other Financial Institutions," which includes "Money Market Funds, Finance Companies, Structured Finance Vehicles, Hedge Funds, Other Investment Funds, Broker-Dealers, Real-Estate Investment Trusts and Funds." Here's a figure showing how these "other financial institutions" (the red and blue bars) compare in size with the banking sector (the yellow bars) in a number of countries. In the U.S., for example, the "other financial institutions" are bigger than the banking sector.

What's up with the huge size of the "other financial institutions" in the Netherlands? The report says: "In the Netherlands, Special Financial Institutions (SFIs) comprise about two-thirds of the OFIs sector and thereby explain most of the size of the shadow banking sector. There are about 14 thousand SFIs, which are typically owned by foreign multinationals who use these entities to attract external funding and facilitate intra-group transactions." For a discussion of what some of these Dutch Special Financial Institutions are doing, you can check my post from last June on the Double Irish Dutch Sandwich.

Obviously, not all of these "other financial institutions" are engaged in bank-like activities outside the banking sector. Some are acting in ways that don't involve the bank-like activities of making a loan and expecting an interest payment–for example, they may just be investing in stock of companies or in land, or working with financial derivatives that involve commodity prices or exchange rates or interest rate movements. Some are interconnected with banks in ways that mean they are covered by the bank regulatory apparatus. So the FSB tries to subtract these activities out, and get an estimate of the "shadow banking" sector by itself. For the U.S., shadow banking is estimated at about half of the total of "other financial institutions," at roughly $13 trillion in assets.

I've discussed the concerns with shadow banking on this blog before (for example, here and here). The short story is that we learned long ago that economic instability can interact with instability in the banking sector in a way that causes economic and financial weakness to feed on each other in a vicious circle. This is why pretty much every country in the world has bank deposit insurance (to prevent bank runs) and bank regulation (to prevent banks from taking too much risk). But there are lots of "other financial institutions" that receive funds and make loans. This "shadow banking" sector too can be part of a vicious circle of economic and financial instability, as we learned from 2007-2009, and since. Here's the FSB explanation of what shadow banking is and why it matters.

The “shadow banking system” can broadly be described as “credit intermediation involving entities and activities (fully or partially) outside the regular banking system” or non-bank credit intermediation in short. Such intermediation, appropriately conducted, provides a valuable alternative to bank funding that supports real economic activity. But experience from the crisis demonstrates the capacity for some non-bank entities and transactions to operate on a large scale in ways that create bank-like risks to financial stability (longer-term credit extension based on short-term funding and leverage). …

Like banks, a leveraged and maturity-transforming shadow banking system can be vulnerable to “runs” and generate contagion risk, thereby amplifying systemic risk. Such activity, if unattended, can also heighten procyclicality by accelerating credit supply and asset price increases during surges in confidence, while making precipitate falls in asset prices and credit more likely by creating credit channels vulnerable to sudden loss of confidence. These effects were powerfully revealed in 2007-09 in the dislocation of asset-backed commercial paper (ABCP) markets, the failure of an originate-to-distribute model employing structured investment vehicles (SIVs) and conduits, “runs” on MMFs and a sudden reappraisal of the terms on which securities lending and repos were conducted. But whereas banks are subject to a well-developed system of prudential regulation and other safeguards, the shadow banking system is typically subject to less stringent, or no, oversight arrangements.

One part of the report focuses on the Americas, and here's a figure I found thought-provoking. The horizontal axis shows "other financial institutions" relative to GDP, while the vertical axis shows the banking sector relative to GDP. At the far upper right is the Cayman Islands, with a very large banking sector and a very large "other financial institutions" sector relative to GDP. The other two countries with very large banking sectors relative to GDP are Panama and Canada. To put it another way, Panama has a banking sector like the Cayman Islands, but much less of the "other financial institutions."

To me, the interesting comparison is between the U.S. and Canada–both high-income countries with sophisticated financial sectors. Clearly, the U.S. has a larger share of financial activity happening in the "other financial institutions" area, while Canada has a larger share of its financial activity happening explicitly in the banking sector. The Canadian economy is of course closely tied to the U.S. economy. But the recession in Canada was milder than in the U.S., perhaps in part because Canada's financial sector was less exposed to the issues of shadow banking. Given that the banking sector is far more regulated in both countries, this offers a sort of natural experiment or comparison as the economies of the two countries evolve.

A North American Vision

When talking about the U.S. role and prospects in a globalizing economy, it's common to read discussions of issues relating to China, Japan, the European Union, and the "emerging market" countries. But perhaps when thinking about the U.S. economic and geopolitical future, a more basic building block should be to establish closer ties across North America. A report from the Council on Foreign Relations argues this case in "North America: Time for a New Focus" (Independent Task Force Report No. 71; David H. Petraeus and Robert B. Zoellick, Chairs; Shannon K. O’Neil, Project Director). Here's a taste of the overall tone:

\”[W]e believe that the time is right for deeper integration and cooperation among the three sovereign states of North America. Here is our vision: three democracies with a total population of almost half a billion people; energy self-sufficiency and even energy exports; integrated infrastructure that fosters interconnected and highly competitive agriculture, resource development, manufacturing, services, and technology industries; a shared, skilled labor force that prospers through investment in human capital; a common natural bounty of air, water, lands, biodiversity, and wildlife and migratory species; close security cooperation on regional threats of all kinds; and, over time, closer cooperation as North Americans on economic, political, security, and environmental topics when dealing with the rest of the world, perhaps focusing first on challenges in our own hemisphere. …  

The people of North America are creating a shared culture. It is not a common culture, because citizens of the United States, Canada, and Mexico are proud of their distinctive identities. Yet when viewed from a global perspective, the similarities in interests and outlooks are pulling North Americans together. The foundation exists for North America to foster a new model of interstate relations among neighbors, both developing and developed democracies. Now is the moment for the United States to break free from old foreign policy biases to recognize that a stronger, more dynamic, resilient continental base will increase U.S. power globally. “Made in North America” can be the label of the newest growth market. U.S. foreign policy—whether drawing on hard, soft, or smart power—needs to start with its own neighborhood.\”

The report stresses four main areas for cooperation: energy, cross-border economic ties, security concerns, and what it calls \”community.\” Here are a few words on each.

North America and Energy

North America has already tied together its energy markets in various ways: \"For many years, virtually all of Canada’s energy exports—including oil, gas, and electricity—went to the United States. … The North American countries are also connected through their electricity grids; this is especially true for the United States and Canada. The Eastern Interconnection grid—encompassing parts of Eastern Canada, New England, and New York—and the Western Interconnection grid—stretching from Manitoba through the U.S. Midwest—are mutually dependent and beneficial configurations. Though the U.S.-Canada electricity trade constitutes less than 2 percent of total U.S. domestic consumption, the interchanges provide resiliency in case of power overloads or natural disasters. U.S.-Mexico interconnections are more limited, though the two countries are linked in southern California and southwestern Texas.\"

Pipeline connections are happening as well. Natural gas pipelines are being built from Texas producers into Mexico. The report advocates construction of the Keystone pipeline from Canada into the United States. More broadly, it notes: \"The construction of North America’s energy infrastructure has [not kept pace with] oil and gas development. … North Dakota’s Bakken formation, one of the United States’ largest shale formations, continues to flare nearly one-third of its natural gas because of infrastructure limitations. North America should build new pipelines and upgrade older ones, both within and among the three countries, to address the bottlenecks. Without adequate pipeline capacity, energy companies have increasingly turned to the rails, roads, and waterways.\"

In Mexico, the big news is that oil production has been falling, which in turn has led the Mexican government to start expressing some openness to foreign investment. \”In contrast, Mexican oil production has fallen nearly 25 percent since 2004 to 2.5 million b/d in 2012. The downturn reflects the declining output at Cantarell—once the world’s second-largest oil field—combined with lower-than-expected production levels in newer fields, such as the Chicontepec Basin. The decline can also be traced to underinvestment, inefficiencies, and limits on technology and expertise at the state-owned energy company Petróleos Mexicanos (Pemex). Nevertheless, Mexico’s energy potential is substantial. The EIA and Advanced Resources International (ARI) estimate that the country has the world’s sixth-largest recoverable shale gas resources and significant tight oil potential. Mexico has now made a historic move: its energy reform of December 2013 will encourage private companies to invest in Mexico’s energy sector for the first time since the 1930s.\”

For decades, I\'ve been reading and hearing and writing about the risks and costs of U.S. dependence on faraway sources of energy in the Middle East. In a global economy, energy markets will inevitably be intertwined. But the U.S. energy picture is being fundamentally reshaped by the growth of U.S. oil and gas drilling. If combined with the broader development of North American energy resources, the economics and international power dynamics of energy production could be transformed.

North American Economic Ties

Economic ties across North America don\’t always get lots of attention, but they are large. \”The United States exports nearly five times as much to Mexico and Canada as it does to China and almost twice as much as to the European Union. Mexico and Canada sell more than 75 percent of their exports within North America.\” Here\’s one figure showing the rise in North American trade, and another showing the rise in foreign direct investment in North America.

As the report notes: \”North America also shares a workforce: companies and corporations now make products and provide services in all three countries. With integrated supply chains, employees in one country depend on the performance of those in another; together, they contribute to the quality and competitiveness of final products that are sold regionally or globally.\” I would add that the economic evidence shows the North American Free Trade Agreement had a modest but clearly positive effect on the U.S. economy.

As I see it, there are two underlying points here. First, the world economy seems to be organizing itself into regions that rely on supply chains crossing between higher-income and lower-income countries. For example, in Asia there are supply chains running from Japan and Korea to Thailand and China. In Europe there are supply chains running from western to eastern Europe. North America has been building its own supply chains linking the U.S., Canada, and Mexico. Second, and more broadly, a U.S. economy that wants to prosper from growth happening elsewhere in the world economy needs to start thinking internationally. Thinking internationally in terms of Mexico and Canada is a start in that direction.

Security Issues

National security issues are not my bailiwick, so I don\'t have much to say here. But I\'ll make the commonplace observation that one often hears concerns about terrorist groups who might be able to ship people or materials into the U.S. by way of Canada or Mexico. There are also concerns about how epidemics might spread, or about how to deal with natural disasters. There are obvious advantages in all of these cases to not just thinking in terms of the U.S. border, but also thinking about a sort of border around the continent of North America. Deeper economic and energy relationships could easily be combined with some coordination of security and other measures. I\'m all in favor of stopping terrorist plots before they cross the U.S. border.

Community

This is the catch-all word that the report uses for demographic, travel, and immigration issues. Here is some discussion of the demographic issues:

Compared to the rest of the world, North America enjoys an enviable demographic pyramid: the region’s population is relatively young and fertile. North America benefits from larger families—averaging just over two children per family versus 1.6 in Europe and 1.7 in China—with the advantage coming largely from Mexico’s younger population and slightly higher birth rates. In fact, Mexico is currently in the middle of its “demographic bonus”—the country’s working-age adults outnumber children and the elderly. By comparison, the United States’ and Canada’s demographics are more mature, but their age pyramids have been tempered by their relatively open immigration policies. The region’s future workforce size—a fundamental factor in calculating future economic growth—also compares favorably, with 22 percent of North Americans below thirty years old, compared to 16 percent in both China and Europe. North America has yet to make the most of its demographic advantages.

Here\’s some discussion of cross-border movement and residence in North America:

\”Some thirty-four million Mexicans and Mexican-Americans and more than three million Canadians and Canadian-Americans live in the United States. Nearly one million U.S. expatriates and a large number of Canadians live, at least part of the year, in Mexico. Another one million to two million U.S. citizens and a growing number of Mexicans live in Canada. Shorter stays are numerous. U.S. citizens choose Mexico for their getaways more than any other foreign locale. Mexicans and Canadians return the favor, comprising the largest groups of tourists entering the United States: a combined thirty-four million visitors each year who contribute an estimated $35 billion to the U.S. economy. Workers, students, and shoppers routinely cross the borders; there were 230 million land border crossings in 2012, or roughly 630,000 a day. Indigenous communities also span the border, with residents frequently crossing back and forth.\”

Finally, there\’s the ever-touchy subject of immigration. I\’ve posted my thoughts on immigration policy before, for example here and here, or the five consecutive posts on immigration policy back in February 2012: here, here, here, here, and here. I won\’t rehearse all the arguments again here, but a couple of points seem worth making.

First, Mexico seems to be evolving from a country that mostly experienced emigration into one that also receives immigrants. A couple of years ago, net emigration from Mexico essentially stopped. Now, Mexico is experiencing a certain degree of immigration–often in the form of former emigrants who are returning. The report says:

As a traditional country of emigration, Mexico’s immigration policies are different from those of its northern neighbors. These dynamics are beginning to change. With roughly 1.4 million former emigrants returning to Mexico between 2005 and 2010, the country can utilize the skills and capital that migrants bring home. Mexico also now faces an inflow of people born abroad—immigrants grew from just under five hundred thousand in 2000 to almost one million in 2010. More than three-quarters of these immigrants were born in the United States; the vast majority are children under the age of fifteen.

For this reason, thinking about the potential for emigration from Mexico to the U.S. in terms of the experiences of the 1980s or the 1990s is likely to be misleading.

In addition, I find myself wondering if thinking about migration from Mexico in a broad North American context might not offer an alternative approach to the U.S. debate over immigration. Imagine a scenario in which it was relatively easy and legal for people from Mexico, Canada, and the United States to work in each other\'s countries, but each country could keep its immigration rules with regard to all other countries in the world. Imagine further that people from Mexico, Canada, and the United States could work in each other\'s countries but would not become citizens of the other country (unless they separately applied to do so) and would not be eligible for income support programs in the other country (unless the host country passed specific laws offering such support).

Of course, this in-between approach to cross-border migration wouldn\’t please either those who want to open the borders or those who want to close them. It requires thinking about freedom of movement across North American countries in a different way than we have traditionally done. But the greater freedom of movement might be useful in offering legal status, short of citizenship, for Mexicans who are already working and living in the United States. And if the freedom of movement was limited to North America, then some of the concerns about opening U.S. borders to the world\’s ultra-poor would be ameliorated. I haven\’t thought through the possibilities of such an arrangement in detail, but as part of thinking about what a true North American geopolitical collaboration might look like, it seems worth pondering.

The Excessive Sameness of Politics and Hotelling\’s Main Street

A lot of politicians may sound like they have differentiated views on the campaign trail, but either during the campaign or after being elected, they seem to become homogenized and squishy in their views. Thus, many voters of all political dispositions are continually frustrated because they feel as if all politicians are discomfortingly alike. Maybe you want to vote for someone who isn\'t a hanging-off-the-ideological-cliff extremist, but you would like to vote for someone with clear and definite views–even if you differ with some of those views. To quote a phrase associated with the Goldwater presidential campaign of 1964, but applicable to all sides of the political spectrum, many voters feel that they want \"a choice, not an echo.\"

A famous long-ago economist named Harold Hotelling proposed a classic explanation for this phenomenon back in a paper called \”Stability in Competition,\” published in the March 1929 issue of the Economic Journal (39:153, pp. 41-57).

In one of his illustrations, Hotelling discussed the case of two sellers of a product who are thinking about where to locate along Main Street. For simplicity, imagine that the addresses along the street are numbered from 1-100. The working assumption is that customers are spread evenly along Main Street, and the customers will go to whichever store is located closer to them. In this situation, if one store locates at, say, 10 Main Street, the other store will then choose to locate at 11 Main Street. The first store will then get all the customers from 1-10, and the second store will get all the customers from 11-100. The first store will then relocate to 12 Main Street, to snag the majority of customers, and the two stores will keep leap-frogging each other and relocating until they end up located side by side, right in the middle of Main Street.

As Hotelling pointed out back in 1929, this clustering is not ideal. From the consumers\' point of view, it would be more useful to have the two stores located at 25 Main Street and 75 Main Street, because then no consumer along the street from 1 to 100 would be more than 25 addresses away from a store. But the dynamics of competition can lead to excessive clustering.
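For readers who like to see the mechanics spelled out, here is a minimal sketch in Python of the leap-frogging dynamic described above. The alternating best-response rule and the starting positions are illustrative assumptions of mine, not Hotelling\'s original formalism:

```python
# Toy simulation of two stores leap-frogging along a 100-address Main
# Street. Customers are spread evenly and shop at the nearer store;
# a tie is split evenly between the two stores.

STREET = range(1, 101)

def customers_won(my_pos, rival_pos):
    """Count customers who choose my_pos over rival_pos."""
    total = 0.0
    for c in STREET:
        if abs(c - my_pos) < abs(c - rival_pos):
            total += 1.0
        elif abs(c - my_pos) == abs(c - rival_pos):
            total += 0.5
    return total

def best_response(rival_pos):
    """Pick the address that wins the most customers against a rival."""
    return max(STREET, key=lambda p: customers_won(p, rival_pos))

a, b = 10, 90              # start the stores far apart
for _ in range(25):        # let them take turns relocating
    a = best_response(b)
    b = best_response(a)
print(a, b)                # they end up side by side near the middle
```

Run under these assumptions, the two stores converge to adjacent addresses around the center of the street, even though locating at 25 and 75 would roughly halve the worst-case distance any customer has to travel.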

Hotelling argued that this excessive sameness is apparent in many aspects of public competition, including competition between firms introducing new products, and competition between Republicans and Democrats. He wrote:

\”Buyers are confronted everywhere with an excessive sameness. When a new merchant or manufacturer sets up shop he must not produce something exactly like what is already on the market or he will risk a price war … But there is an incentive to make the new product very much like the old, applying some slight change which will seem an improvement to as many buyers as possible without ever going far in this direction. The tremendous standardisation of our furniture, our houses, our clothing, our automobiles and our education are due in part to the economies of large-scale production, in part·to fashion and imitation. But over and above these forces is the effect we have been discussing, the tendency to make only slight deviations in order to have for the new commodity as many buyers of the old as possible, to get, so to speak, between·one\’s competitors and a mass of customers.

So general is this tendency that it appears in the most diverse fields of competitive activity, even quite apart from what is called economic life. In politics it is strikingly exemplified. The competition for votes between the Republican and Democratic parties does not lead to a clear drawing of issues, an adoption of two strongly contrasted positions between which the voter may choose. Instead, each party strives to make its platform as much like the other\'s as possible. Any radical departure would lose many votes, even though it might lead to stronger commendation of the party by some who would vote for it anyhow. Each candidate \"pussyfoots,\" replies ambiguously to questions, refuses to take a definite stand in any controversy for fear of losing votes. Real differences, if they ever exist, fade gradually with time though the issues may be as important as ever. The Democratic party, once opposed to protective tariffs, moves gradually to a position almost, but not quite, identical with that of the Republicans. It need have no fear of fanatical free-traders, since they will still prefer it to the Republican party, and its advocacy of a continued high tariff will bring it the money and votes of some intermediate groups.\"

Of course, it\'s not literally true that Republican and Democratic politicians both locate exactly in the middle of the political spectrum. Hotelling was describing a tendency to push to the middle, but in politics, there is also a need to assure your voters that you share their beliefs. Thus, there\'s a saying that American politics is a battle fought between the 40-yard lines. (For those unfamiliar with the line markers on an American football field, the statement suggests that the political battle is fought between the addresses of 40 and 60 on a Hotelling-style Main Street.) Mainstream politicians thus face a continual dynamic where they seek to reassure their more ardent partisans that they are on their side, while shading and tacking as needed to pick up voters in the middle. At an intuitive level, politicians recognize that offering \"a choice, not an echo\" is part of what led Barry Goldwater to a loss of historic magnitude in the 1964 U.S. presidential election.

Political competition that is usually between centrists, whether right-of-center or left-of-center, does have some benefits. Extremists are much less likely to win high office. And even when the other side wins, it\’s reassuring to think that the person who won is at least closer to the center than the true believers at the extreme of that side. But every now and then, many of us yearn for a few more conviction politicians, who say what they mean and mean what they say, who play a greater role in driving the public debate, and who are OK with the possibility that doing so might end up costing them an election.

[For the record, using the metaphor of a football field to describe the range of political choice seems to have originated with the 1970 best-seller The Real Majority: An Extraordinary Examination of the American Electorate, by Ben Wattenberg and Richard M. Scammon. But they used the image to discuss how political conflict might sometimes be between those near the middle and sometimes between those with more extreme positions. The claim that American politics usually happens between the 40-yard lines is one of those statements that seems to have evolved afterwards, without a clear single author.]

Is Better Communication Longer and More Complex?

Twenty years ago, when the Federal Open Market Committee wanted to change interest rates, it didn\'t make any announcement. It just took action, and market participants had to infer the change from those actions. Mark Wynne of the Federal Reserve Bank of Dallas explains in “A Short History of FOMC Communication”:

The first time the FOMC issued a statement immediately after a meeting explaining what action had been decided was on Feb. 4, 1994. That statement simply noted that the committee decided to “increase slightly the degree of pressure on reserve positions” and that this was “expected to be associated with a small increase in short-term money market interest rates.” By way of explanation for why the committee was announcing its decision, the statement said that this was being done “to avoid any misunderstanding of the committee’s purposes, given the fact that this is the first firming of reserve market conditions by the committee since early 1989.” In February 1995, the committee decided that all changes in the stance of monetary policy would be announced after the meeting.

But over the years, these announcements of Fed policy have become longer and more complex. Rubén Hernández-Murillo and Hannah Shell of the Federal Reserve Bank of St. Louis have created a vivid figure to show the change in \"The Rising Complexity of the FOMC Statement.\" The colors of the circles show who was leading the Fed at the time: blue for Greenspan, red for Bernanke, and green for Yellen. The area of the circle shows the number of words in the statement: clearly, the statements have been getting wordier over time. And on the vertical axis, the FOMC statements were run through a standard diagnostic tool for determining their \"reading grade level.\" In short, the statements back in the mid-1990s were often pitched at about a 12th-grade level. But over time, and especially after the financial crisis hit, the FOMC statements ratcheted up to a \"19th-grade\" level, which is to say that they were pitched at readers with post-college graduate study.
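For the curious, here is roughly how such a grade-level diagnostic works. The Flesch-Kincaid formula below is one standard choice, though it is my assumption rather than a tool the authors name, and the vowel-group syllable counter is a crude heuristic used purely for illustration:

```python
# A rough Flesch-Kincaid reading-grade calculator. The formula itself
# is standard; the syllable counter is a crude heuristic, so scores
# will only approximate those from professional readability tools.

import re

def count_syllables(word):
    """Approximate syllables as runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Grade = 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Score the sentence quoted from the 1994 FOMC statement above.
sample = ("The Committee decided to increase slightly the degree "
          "of pressure on reserve positions.")
print(round(fk_grade(sample), 1))
```

Longer sentences and more multi-syllable words push the grade higher, which is the mechanical sense in which the post-crisis statements scored at a \"19th-grade\" level.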

This trend raises a question last spotted among the \"advice to the lovelorn\" columnists: Is communication better if it is longer and more complex? As an editor, I confess that I\'m suspicious of length and complexity. A wise economist friend used to point out to me that in academia, specialized terminology always serves two purposes: it streamlines and simplifies communication among specialists, and it shuts out nonspecialists. Of course, all academics like to believe that we are only using specialized terminology for the loftiest of intellectual purposes, not because we are as much of an in-group as a set of gossiping teenagers, with our own slang devised to define membership in the group and to separate ourselves from others.

But even my cynical side remembers a fundamental rule of exposition, often attributed to Albert Einstein: \"Everything should be made as simple as possible, but not simpler.\"

It makes sense that the Federal Reserve statements became longer and more complex as the Fed began to specify a numerical range for its interest rate policies in the late 1990s; and then began to describe how it saw future risks in 1999; and then began to specify how quickly it expected to adjust future monetary policy in 2004; and then began its \"quantitative easing\" policy in 2008.

In December 2012, as Wynne points out, the Fed altered its communication substantially by saying that “this exceptionally low range for the federal funds rate will be appropriate at least as long as the unemployment rate remains above 6½ percent, inflation between one and two years ahead is projected to be no more than a half percentage point above the committee’s 2 percent longer-run goal and longer-term inflation expectations continue to be well anchored.” In other words, the Fed for the first time had announced that it would keep interest rates at a certain level until a certain economic statistic–the unemployment rate–had moved in a certain way. But when the unemployment rate fell below 6.5% in April 2014, this earlier statement had created expectations that the Fed would then start raising interest rates, which, as it turned out, it wasn\'t yet ready to do.
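That guidance amounted to a conditional rule, and its logic can be restated as a stylized sketch. The function below simply re-expresses the thresholds quoted above; the names and the code itself are illustrative, since the actual FOMC commitment was qualitative language, not a formula:

```python
# Stylized restatement of the December 2012 forward-guidance
# thresholds. This is an illustration of the stated conditions,
# not an actual Fed algorithm.

def keep_rate_near_zero(unemployment, projected_inflation,
                        expectations_anchored=True):
    """True while the stated 2012 thresholds call for holding rates low."""
    return (unemployment > 6.5                 # unemployment threshold
            and projected_inflation <= 2.5     # 2% goal plus half a point
            and expectations_anchored)

print(keep_rate_near_zero(7.8, 1.8))   # True: conditions in late 2012
print(keep_rate_near_zero(6.3, 1.8))   # False: by spring 2014
```

Once the unemployment rate crossed 6.5 percent, the stated condition no longer held, which is exactly why the earlier guidance generated expectations of imminent rate increases.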

How the Federal Reserve and other major central banks carry out their policies has been fundamentally transformed during the last six years. Explaining these changes is important. But if the explanations are pitched at a level that only makes sense to PhD economists, they aren\’t much help. And as the personal advice columnists remind us, if someone is talking and talking but not giving you a straight answer that you can understand, you have some reason to mistrust whether they know their own mind–and even whether they are really trying to tell you the truth. I\’ll give the last word here to Hernández-Murillo and Shell:

As the Fed returns to using conventional monetary policy tools, it is likely that the reading levels of its statements will decline. However, if the Fed continues to use unconventional instruments for a considerable period, it may need to consider how to explain its policy actions in simpler terms to avoid volatility in financial markets.