Winter 2014 Journal of Economic Perspectives is Live!

The Winter 2014 issue of the Journal of Economic Perspectives is now freely available on-line, courtesy of the publisher, the American Economic Association. Indeed, not only this issue but all previous issues back to 1987 are available. (Full disclosure: I've been the Managing Editor since the journal started, so this issue is #107 for me.) I'll probably blog about some of these articles in the next week or two. But for now, I'll first list the table of contents, and then below will provide abstracts of articles and weblinks.

Symposium: Manufacturing
"US Manufacturing: Understanding Its Past and Its Potential Future," by Martin Neil Baily and Barry P. Bosworth
"Competing in Advanced Manufacturing: The Need for Improved Growth Models and Policies," by Gregory Tassey
"Management Practices, Relational Contracts, and the Decline of General Motors," by Susan Helper and Rebecca Henderson

Symposium: Agriculture
"Global Biofuels: Key to the Puzzle of Grain Market Behavior," by Brian Wright
"Agricultural Biotechnology: The Promise and Prospects of Genetically Modified Crops," by Geoffrey Barrows, Steven Sexton and David Zilberman
"Agriculture in the Global Economy," by Julian M. Alston and Philip G. Pardey
"American Farms Keep Growing: Size, Productivity, and Policy," by Daniel A. Sumner

Articles
"From Sick Man of Europe to Economic Superstar: Germany's Resurgent Economy," by Christian Dustmann, Bernd Fitzenberger, Uta Schönberg and Alexandra Spitz-Oener
"When Ideas Trump Interests: Preferences, Worldviews, and Policy Innovations," by Dani Rodrik
"An Economist's Guide to Visualizing Data," by Jonathan A. Schwabish

Features
"Recommendations for Further Reading," by Timothy Taylor
"Correspondence: The One Percent," Robert Solow, N. Gregory Mankiw, Richard V. Burkhauser, and Jeff Larrimore

_________________________________________

And here are the abstracts and links:

Symposium: Manufacturing

US Manufacturing: Understanding Its Past and Its Potential Future
Martin Neil Baily and Barry P. Bosworth
The development of the US manufacturing sector over the last half-century displays two striking and somewhat contradictory features: 1) the growth of real output in the US manufacturing sector, measured by real value added, has equaled or exceeded that of total GDP, keeping the manufacturing share of the economy constant in price-adjusted terms; and 2) there is a long-standing decline in the share of total employment attributable to manufacturing. The persistence of these trends seems inconsistent with stories of a recent or sudden crisis in the US manufacturing sector. After all, as recently as 2010, the United States had the world's largest manufacturing sector measured by its value added, and while it has now been surpassed by China, the United States remains a very large manufacturer. On the other hand, there are some potential causes for concern. First, though manufacturing's output share of GDP has remained stable over 50 years, and manufacturing retains a reputation as a sector of rapid productivity improvements, this is largely due to the spectacular performance of one subsector of manufacturing: computers and electronics. Second, recently there has been a large drop in the absolute level of manufacturing employment that many find alarming. Third, the US manufacturing sector runs an enormous trade deficit, equaling $460 billion in 2012, which is also very concentrated in trade with Asia. Finally, we consider the future evolution of the manufacturing sector and its importance for the US economy. Many of the largest US corporations continue to shift their production facilities overseas. It is important to understand why the United States is not perceived to be an attractive base for their production.
Full-Text Access | Supplementary Materials

Competing in Advanced Manufacturing: The Need for Improved Growth Models and Policies
Gregory Tassey
The United States has underinvested for several decades in a set of productivity-enhancing assets necessary for the long-term health of its manufacturing sector. Conventional characterizations of the process of bringing new advanced manufacturing products to market usually leave out two important elements: One is "proof-of-concept research" to establish broad "technology platforms" that can then be used as a basis for developing actual products. The second is a technical infrastructure of "infratechnologies" that include the analytical tools and standards needed for measuring and classifying the components of the new technology; metrics and methods for determining the adequacy of the multiple performance attributes of the technology; and the interfaces among hardware and software components that must work together for a complex product to perform as specified. If the public–private dynamics are not properly aligned to encourage proof-of-concept research and needed infratechnologies, then promising advances in basic science can easily fall into a "valley of death" and fail to evolve into modern advanced manufacturing technologies that are ready for the marketplace. Each major technology has a degree of uniqueness that demands government support sufficiently sophisticated to allow efficient adaptation to the needs of its particular industry, whether semiconductors, pharmaceuticals, computers, communications equipment, medical equipment, or some other technology-based industry.
Full-Text Access | Supplementary Materials

Management Practices, Relational Contracts, and the Decline of General Motors
Susan Helper and Rebecca Henderson
General Motors was once regarded as the best-managed and most successful firm in the world. However, between 1980 and 2009, GM's US market share fell from 46 to 20 percent, and in 2009 the firm went bankrupt. We argue that the conventional explanation for this decline—namely high legacy labor and healthcare costs—is seriously incomplete, and that GM's share collapsed for many of the same reasons that many highly successful American firms of the 1960s were forced from the market, including a failure to understand the nature of the competition they faced and an inability to respond effectively once they did. We focus particularly on the problems GM encountered in developing the relational contracts essential to modern design and manufacturing, and we discuss a number of possible causes for these difficulties. We suggest that GM's experience may have important implications for our understanding of the role of management in the modern, knowledge-based firm and for the potential revival of manufacturing in the United States.
Full-Text Access | Supplementary Materials

Symposium: Agriculture

Global Biofuels: Key to the Puzzle of Grain Market Behavior
Brian Wright
In the last half-decade, sharp jumps in the prices of wheat, rice, and corn, which furnish about two-thirds of the calorie requirements of mankind, have attracted worldwide attention. These price jumps in grains have also revealed the chaotic state of economic analysis of agricultural commodity markets. Economists and scientists have engaged in a blame game, apportioning percentages of responsibility for the price spikes to bewildering lists of factors, which include a surge in meat consumption, idiosyncratic regional droughts and fires, speculative bubbles, a new "financialization" of grain markets, the slowdown of global agricultural research spending, jumps in costs of energy, and more. Several observers have claimed to identify a "perfect storm" in the grain markets in 2007/2008, a confluence of some of the factors listed above. In fact, the price jumps since 2005 are best explained by the new policies causing a sustained surge in demand for biofuels. The rises in food prices since 2004 have generated huge wealth transfers to global landholders, agricultural input suppliers, and biofuels producers. The losers have been net consumers of food, including large numbers of the world's poorest peoples. The cause of this large global redistribution was no perfect storm. Far from being a natural catastrophe, it was the result of new policies to allow and require increased use of grain and oilseed for production of biofuels. Leading this trend were the wealthy countries, initially misinformed about the true global environmental and distributional implications.
Full-Text Access | Supplementary Materials

Agricultural Biotechnology: The Promise and Prospects of Genetically Modified Crops
Geoffrey Barrows, Steven Sexton and David Zilberman
For millennia, humans have modified plant genes in order to develop crops best suited for food, fiber, feed, and energy production. Conventional plant breeding remains inherently random and slow, constrained by the availability of desirable traits in closely related plant species. In contrast, agricultural biotechnology employs the modern tools of genetic engineering to reduce uncertainty and breeding time and to transfer traits from more distantly related plants. Critics express concerns that the technology imposes negative environmental effects and jeopardizes the health of those who consume the "frankenfoods." Supporters emphasize potential gains from boosting output and lowering food prices for consumers. They argue that such gains are achieved contemporaneous with the adoption of farming practices that lower agrochemical use and lessen soil erosion. The extensive experience with agricultural biotechnology since 1996 provides ample evidence with which to test the claims of supporters and opponents and to evaluate the prospects of genetic crop engineering. In this paper, we begin with an overview of the adoption of the first generation of agricultural biotechnology crops. We then look at the evidence on the effects of these crops: on output and prices, on the environment, and on consumer health. Finally, we consider intellectual property issues surrounding this new technology.
Full-Text Access | Supplementary Materials

Agriculture in the Global Economy
Julian M. Alston and Philip G. Pardey
The past 50-100 years have witnessed dramatic changes in agricultural production and productivity, driven to a great extent by public and private investments in agricultural research, with profound implications especially for the world's poor. In this article, we first discuss how the high-income countries like the United States represent a declining share of global agricultural output while middle-income countries like China, India, Brazil, and Indonesia represent a rising share. We then look at the differing patterns of agricultural inputs across countries and the divergent productivity paths taken by their agricultural sectors. Next we examine productivity more closely and the evidence that the global rate of agricultural productivity growth is declining—with potentially serious prospects for the price and availability of food for the poorest people in the world. Finally we consider patterns of agricultural research and development efforts.
Full-Text Access | Supplementary Materials

American Farms Keep Growing: Size, Productivity, and Policy
Daniel A. Sumner
Commercial agriculture in the United States is comprised of several hundred thousand farms, and these farms continue to become larger and fewer. The size of commercial farms is sometimes best-measured by sales, in other cases by acreage, and in still other cases by quantity produced of specific commodities, but for many commodities, size has doubled and doubled again in a generation. This article summarizes the economics of commercial agriculture in the United States, focusing on growth in farm size and other changes in size distribution in recent decades. I also consider the relationships between farm size distributions and farm productivity growth and farm subsidy policy.
Full-Text Access | Supplementary Materials

Articles

From Sick Man of Europe to Economic Superstar: Germany's Resurgent Economy
Christian Dustmann, Bernd Fitzenberger, Uta Schönberg and Alexandra Spitz-Oener
In the late 1990s and into the early 2000s, Germany was often called "the sick man of Europe." Indeed, Germany's economic growth averaged only about 1.2 percent per year from 1998 to 2005, including a recession in 2003, and unemployment rates rose from 9.2 percent in 1998 to 11.1 percent in 2005. Today, after the Great Recession, Germany is described as an "economic superstar." In contrast to most of its European neighbors and the United States, Germany experienced almost no increase in unemployment during the Great Recession, despite a sharp decline in GDP in 2008 and 2009. Germany's exports reached an all-time record of $1.738 trillion in 2011, which is roughly equal to half of Germany's GDP, or 7.7 percent of world exports. Even the euro crisis seems not to have been able to stop Germany's strengthening economy and employment. How did Germany, with the fourth-largest GDP in the world, transform itself from "the sick man of Europe" to an "economic superstar" in less than a decade? We present evidence that the specific governance structure of the German labor market institutions allowed them to react flexibly in a time of extraordinary economic circumstances, and that this distinctive characteristic of its labor market institutions has been the main reason for Germany's economic success over the last decade.
Full-Text Access | Supplementary Materials

When Ideas Trump Interests: Preferences, Worldviews, and Policy Innovations
Dani Rodrik
Ideas are strangely absent from modern models of political economy. In most prevailing theories of policy choice, the dominant role is instead played by "vested interests"—elites, lobbies, and rent-seeking groups which get their way at the expense of the general public. Any model of political economy in which organized interests do not figure prominently is likely to remain vacuous and incomplete. But it does not follow from this that interests are the ultimate determinant of political outcomes. Here I will challenge the notion that there is a well-defined mapping from "interests" to outcomes. This mapping depends on many unstated assumptions about the ideas that political agents have about: 1) what they are maximizing, 2) how the world works, and 3) the set of tools they have at their disposal to further their interests. Importantly, these ideas are subject to both manipulation and innovation, making them part of the political game. There is, in fact, a direct parallel, as I will show, between inventive activity in technology, which economists now routinely make endogenous in their models, and investment in persuasion and policy innovation in the political arena. I focus specifically on models professing to explain economic inefficiency and argue that outcomes in such models are determined as much by the ideas that elites are presumed to have on feasible strategies as by vested interests themselves. A corollary is that new ideas about policy—or policy entrepreneurship—can exert an independent effect on equilibrium outcomes even in the absence of changes in the configuration of political power. I conclude by discussing the sources of new ideas.
Full-Text Access | Supplementary Materials

An Economist's Guide to Visualizing Data
Jonathan A. Schwabish
Once upon a time, a picture was worth a thousand words. But with online news, blogs, and social media, a good picture can now be worth so much more. Economists who want to disseminate their research, both inside and outside the seminar room, should invest some time in thinking about how to construct compelling and effective graphics.
Full-Text Access | Supplementary Materials

Features

Recommendations for Further Reading
Timothy Taylor
Full-Text Access | Supplementary Materials

Correspondence: The One Percent
Robert Solow, N. Gregory Mankiw, Richard V. Burkhauser, and Jeff Larrimore
Full-Text Access | Supplementary Materials

Halfway to Full Economic Recovery

Since the Great Recession officially ended about four and a half years ago, in June 2009, the natural question has been: When does the U.S. economy get that jolt of bounceback growth to make up for what was lost? The Congressional Budget Office gives its answer in its just-published report "The Budget and Economic Outlook: 2014 to 2024": "CBO projects that real GDP will grow notably faster over the next few years than it has over the past few years. On a fourth-quarter-to-fourth-quarter basis, real GDP is projected to increase by 3.1 percent this year, by 3.4 percent per year in 2015 and 2016, and by 2.7 percent in 2017 … By the second half of 2017, CBO projects, real GDP will return to its average historical relationship with potential (or maximum sustainable) GDP …"

In short, although the prediction is that the U.S. economy is roughly halfway from the end of the recession to a full economic recovery, this is a case where the glass is actually half-full, rather than half-empty, because the heartier period of economic growth is coming. Here are a few of the details.

Here's a figure showing how the Great Recession reduced economic output below its potential, and the CBO projection for bounceback in the next few years.

Household wealth relative to income, which took an enormous hit during the Great Recession from the double-whammy of falling housing prices and a falling stock market, has now moved back to higher levels.

However, business investment has only just started its bounceback, and the CBO projections suggest that it will be leading the way in the next few years.

What about the unemployment rate and the labor market? The CBO has also just published "The Slow Recovery of the Labor Market" to tackle that subject. The grim fact here is that after the end of the average U.S. recession, the number of jobs takes a couple of quarters to start growing again. But after the end of the Great Recession, the number of jobs kept falling, and has been slower to recover (as shown by the flatter slope of the lower line in the figure).

Yes, the unemployment rate has fallen from 10% in October 2009 to 6.7% in December 2013, which is painfully slow but still better than a sharp stick in the eye. How much of the remaining unemployment is because of a lack of demand in the economy, and how much is because of "skill mismatches"? Here's the CBO:

Of the roughly 2 percentage-point net increase in the rate of unemployment between the end of 2007 and the end of 2013, about 1 percentage point was the result of cyclical weakness in the demand for goods and services, and about 1 percentage point arose from structural factors; those factors are chiefly the stigma workers face and the erosion of skills that can stem from long-term unemployment (together worth about one-half of a percentage point of increase in the unemployment rate) and a decrease in the efficiency with which employers are filling vacancies (probably at least in part as a result of mismatches in skills and locations, and also worth about one-half of a percentage point of the increase in the unemployment rate).

But even with the lower overall unemployment rate, the long-term unemployment rate, that is, the share of the labor force whose joblessness has lasted more than 26 weeks, remains historically high.

Also, the share of U.S. workers participating in the labor force has declined, which raises the possibility that at least some of them would have preferred to keep working, but became discouraged about their job prospects and gave up. Notice, however, that the decline in labor force participation actually started back around 2000. It was fairly well-known among economists that as the period in which women were pouring into the (paid) labor force came to an end, as the Baby Boom generation aged, and as a greater share of young people started to attend college, labor force participation rates would tend to drop off.

So the difficult question is: how much of the decline in labor force participation is a result of these longer-term trends, and how much is a result of discouraged workers leaving the workforce because of the Great Recession? Here's how the CBO answers that question:

Of the roughly 3 percentage-point net decline in the labor force participation rate between the end of 2007 and the end of 2013, about 1½ percentage points was the result of long-term trends (primarily the aging of the population), about 1 percentage point was the result of temporary weakness in employment prospects and wages, and about one-half of a percentage point was attributable to unusual aspects of the slow recovery that led workers to become discouraged and permanently drop out of the labor force.
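The two CBO decompositions quoted above can be tabulated as a quick arithmetic check that the pieces add up to the quoted totals (the figures are the CBO's, already rounded by the CBO; the category labels are my paraphrases):

```python
# CBO's decompositions of labor market changes between end-2007 and
# end-2013 (percentage points), as quoted above. Labels are paraphrases.
unemployment_rise = {
    "cyclical weakness in demand for goods and services": 1.0,
    "stigma and skill erosion from long-term unemployment": 0.5,
    "less efficient filling of vacancies (skill/location mismatch)": 0.5,
}
participation_decline = {
    "long-term trends (primarily population aging)": 1.5,
    "temporary weakness in employment prospects and wages": 1.0,
    "workers permanently discouraged by the slow recovery": 0.5,
}

# The components should sum to the ~2 point rise in unemployment
# and the ~3 point fall in participation that the CBO reports.
assert sum(unemployment_rise.values()) == 2.0
assert sum(participation_decline.values()) == 3.0

for name, parts in [("Rise in unemployment rate", unemployment_rise),
                    ("Decline in participation rate", participation_decline)]:
    print(f"{name}: {sum(parts.values()):.1f} percentage points")
    for label, points in parts.items():
        print(f"  {points:.1f}  {label}")
```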

Economics of Human Sacrifice

Ben Richmond has an interview with Peter Leeson at the online magazine Motherboard, titled "There's a Rational Explanation for Human Sacrifice." In the introduction, Richmond points out that Leeson has "published papers on the medieval European practice of putting rats and vermin up on trial, an African society that poisons chickens to tell the future, and he has an upcoming paper on the practice of auctioning off wives. So ritualized sacrifice fit right in his wheelhouse." The academic paper, "Human Sacrifice," was published in the inaugural issue of a new journal, the Review of Behavioral Economics. Here, I'll draw on the interview and underlying article.

Leeson's example of human sacrifice is the Kond people, who lived in the Eastern Ghats mountain range of India in the first half of the 19th century, "the most significant and well-known society of ritual immolators in the modern era." When the British encountered this group around 1835, they discovered that human sacrifice was widespread. The Kond population as a whole was several hundred thousand people. There was no central government, but many tribes that each included several villages. The villages often raided each other, stealing cattle, food, and tools.

At least once each year, and often several times, tribes would purchase victims, typically non-Konds. Some tribes would purchase only one victim; others might buy 20 or more. After a wild three-day festival, the victim would be killed in some ceremonial and brutal way that always ended with the victim being torn into pieces. Sometimes the crowd tore apart the victims; in other cases, the victims were drowned in pig's blood or beaten to death before being torn apart. Then a representative of each village would take a strip of flesh back to the village, where it was cut into smaller pieces so that everyone had a piece to bury in their field.

According to Kond belief, a victim or meriah had to be purchased; they did not view criminals or prisoners of war as suitable for sacrifice. Also, the price was high. Leeson explains (citations omitted): "Konds' unit of account was an article of such property they called a 'life' (or gonti). A life consisted of property such as 'a bullock, a buffalo, goat, a pig or fowl, a bag of grain, or a set of brass pots . . . . A hundred lives, on average . . . consist[ing] of ten bullocks, ten buffaloes, ten sacks of corn, ten sets of brass pots, twenty sheep, ten pigs, and thirty fowls.' Meriah prices were rendered in these units. And their prices were considerable. … [A] single meriah cost a purchasing community 'from ten to sixty' lives. This constituted a 'very great expense attendant upon procuring the victims' for sacrifice."

These human sacrifice practices of the Konds raise many questions (!), but from an economic perspective, Leeson focuses on two: Is there a way it might make economic sense to reduce your own wealth? And if so, is there a reason it might make sense to do so by spending the wealth on human sacrifices, rather than, say, burning crops and livestock, giving away land, or destroying tools?

For the first question, Leeson argues that when the risk of conflict is very high, and there is no good way to protect property from attackers, then those who have property may have an incentive to make themselves worse off. He writes: "In agricultural societies nature produces variation in land's output. This variation creates disparities between communities' wealth. Absent government, wealth disparities induce conflict between communities, as those occupying land that received a relatively unfavorable natural shock seek to plunder those whose expected wealth is higher. If conflict's cost is sufficiently high, it is cheaper for communities to protect their property rights by destroying part of their wealth. Wealth destruction depresses the expected payoff of plunder and in doing so protects rights in wealth that remains." As a close-to-home example, Leeson points out that people who live in high-crime neighborhoods may choose to drive beat-up cars or avoid showing any wealth as a way of making themselves less of a target. Leeson writes: "The poverty displayed by some well-known groups — from Gypsies to ascetics — may reflect their members' rational decisions to have more secure property rights in less wealth instead of less secure property rights in more wealth."
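The deterrence logic Leeson describes can be illustrated with a minimal numerical sketch. All the numbers here are invented for illustration, not taken from the paper; the idea is only that raiding stops paying once the target's wealth is low enough:

```python
# Toy sketch of Leeson's deterrence logic (all numbers invented).
# A raiding community attacks only if the expected loot exceeds
# the cost of fighting.
def raid_is_profitable(target_wealth, win_prob, conflict_cost):
    return win_prob * target_wealth > conflict_cost

# Suppose a community holds 100 "lives" of property, raiders win
# with probability 0.5, and fighting costs the raiders 40 lives.
wealth, win_prob, conflict_cost = 100, 0.5, 40

# Expected loot 0.5 * 100 = 50 exceeds 40, so the raid pays.
assert raid_is_profitable(wealth, win_prob, conflict_cost)

# By publicly destroying 25 lives of wealth (say, by buying and
# sacrificing a meriah), expected loot falls to 0.5 * 75 = 37.5 < 40,
# deterring the raid. Destroying 25 lives beats losing an expected
# 50 lives to plunder, so the destruction is rational self-protection.
wealth_after_destruction = wealth - 25
assert not raid_is_profitable(wealth_after_destruction, win_prob, conflict_cost)
```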

But if one wishes to destroy wealth, why do so through the high-priced purchase of victims for human sacrifice? Leeson suggests that, unlike burning crops or some other private method, the purchase meant that the sellers of the victims would carry news of the purchase price far and wide, so the destruction could not be faked by quietly destroying only a small amount of a crop. The festival around the sacrifice meant that the destruction was widely seen and acknowledged, and the news would travel broadly, along with being communicated via pieces of the victim to those who had not attended.

I have a hard time seeing Leeson's explanation as the only reason behind the Kond practice of human sacrifice. For example, it seems plausible that human sacrifice might also have served as a way of making ferocity acceptable and binding the group together, in societies that were often either attacking others or defending themselves. But an economics-based explanation for human sacrifice need not be the exclusive truth in order to be a productive part of a fuller understanding.

To my mind, perhaps the strongest point in Leeson's argument is what happened when the British were trying to stamp out the Kond practice of human sacrifice. For years they tried violent punishment and they tried reason, without success. What did work was an offer: when the British provided the tribes with a guarantee of security and a dispute-resolution mechanism, in effect a centralized government authority, the Kond tribes were immediately willing to give up their practice of human sacrifice. This pattern certainly suggests that the tribes, at least, viewed the sacrifice as a way of keeping civic order.

Income Mobility

The Census data most often used for studying the distribution of income is a snapshot of income each year. With this data, you can see how many people are in the top and bottom income groups, and all the groups in between, but you can't tell whether the same people remain in the top or the bottom groups over time. In other words, the usual data on inequality doesn't allow you to look at the mobility of people across the income distribution. The degree of income mobility might matter a great deal to how one thinks about income inequality. For example, if those in the top or bottom groups are often there for a relatively short period of time, or if one generation of a family has a good chance of ending up in a different part of the income distribution than the previous generation, one might feel differently about the rise in income inequality over the past few decades.

There has been some evidence on mobility across the income distribution over the last few decades, often using the Panel Study of Income Dynamics, a dataset that started tracking a nationally representative sample of 5,000 families back in 1968 and has been following them year-by-year, along with their descendants and the families of their descendants, ever since. Raj Chetty, Nathaniel Hendren, Patrick Kline, Emmanuel Saez, and Nicholas Turner take a whack at this issue, focusing on intergenerational income mobility, in "Is the United States Still a Land of Opportunity? Recent Trends in Intergenerational Mobility." The January 2014 paper, along with data and supporting materials, is available here.

Their bottom line is that the amount of intergenerational income mobility isn't changing over time. They look at intergenerational income mobility with a variety of calculations, but here's one illustrative figure: "This figure plots the difference in average income percentiles for children born to low vs. high-income parents in each year from 1971-1993. On average, children from the poorest families grow up to be 30 percentiles lower in the income distribution than children from the richest families, a gap that has been stable over time. For children born after 1986, estimates are predictions based on college attendance rates."
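The rank-based measure described in that caption can be sketched on simulated data: rank parents and children within their own cohorts, then compare the average child percentile for children of low- versus high-ranked parents. The income process below is invented purely for illustration (Chetty et al. of course work from actual tax records, and their comparison is of children born to the lowest- vs. highest-income parents):

```python
# Sketch of a rank-based mobility measure on simulated data.
# The income process (lognormal incomes, persistence 0.4) is an
# invented assumption, not the authors' estimate.
import random
random.seed(0)

n = 10_000
parent_income = [random.lognormvariate(10, 0.8) for _ in range(n)]
# Child income inherits part of parent income, plus independent noise.
child_income = [0.4 * p + random.lognormvariate(10, 0.8) for p in parent_income]

def percentiles(values):
    """Convert raw values to within-cohort percentile ranks (0-100)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for position, i in enumerate(order):
        ranks[i] = 100.0 * position / (len(values) - 1)
    return ranks

parent_rank = percentiles(parent_income)
child_rank = percentiles(child_income)

# Average child percentile by parent quintile (bottom vs. top).
bottom = [c for p, c in zip(parent_rank, child_rank) if p < 20]
top = [c for p, c in zip(parent_rank, child_rank) if p >= 80]

gap = sum(top) / len(top) - sum(bottom) / len(bottom)
print(f"Child-percentile gap, rich vs. poor parents: {gap:.1f} points")
```

Tracking this gap cohort by cohort, as the authors do, is what lets one say mobility is "stable over time": the gap line stays flat even while the income distribution itself widens.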


Summarizing work in this area, they write: "Putting together our results with evidence from Hertz (2007) and Lee and Solon (2009) that intergenerational mobility did not change significantly between the 1950 and 1970 birth cohorts, we conclude that rank-based measures of social mobility have remained stable over the second half of the twentieth century in the United States. However, intergenerational mobility is significantly lower in the U.S. than in most other developed countries, especially in some parts of the country such as the Southeast and cities in the Rust Belt."

To be clear, saying that intergenerational mobility of incomes hasn't changed is quite different from arguing that the distribution of income hasn't changed. As they point out, a lot of the change in the distribution of income, a greater share of income going to the extreme upper end, doesn't seem to have much effect on the rate of intergenerational mobility across different parts of the income distribution: "However, much of the increase in inequality has come from the extreme upper tail (e.g., the top 1%) in recent decades, and top 1% income shares are not strongly associated with mobility across countries or across metro areas within the U.S."

Indeed, given that the distribution of income has been widening out, it is actually a little surprising that intergenerational mobility has been fairly stable: after all, if the rungs of the income distribution are farther apart, moving between them would seem to be harder. The authors use a ladder diagram to illustrate this theme.

But to put this same point another way, winning the "birth lottery" now has a bigger effect than it did a few decades ago, because stable intergenerational mobility is not offsetting the rising inequality of the income distribution. Of course, the "American Dream," a term coined by the Pulitzer Prize-winning historian James Truslow Adams back in 1931, expresses the powerful value that people in America should have equal opportunity to follow their dreams, in both material income-earning and nonmaterial terms, rather than living in a society calcified by income and class distinctions.

Two points about this research finding are worth emphasizing. First, these authors are a very high-profile and high-powered group, and some of them (Saez in particular) have been strongly associated with arguments about the rising share of income at the top of the income distribution and the desirability of higher marginal tax rates. In other words, this is not a finding that can be dismissed as propaganda from low-level right-wing economists. Second, calculating intergenerational income mobility is a hard data problem. This group has a fairly remarkable dataset, and has made a fairly remarkable effort, but there are sure to be future studies that offer additional subtleties or even contrary findings.

In the details, their result is actually based on four interlocking sets of calculations. As they write: \”For children born during or after 1980, we construct a linked parent-child sample using population tax records spanning 1996-2012. This population-based sample consists of all individuals born between 1980-1993 who are U.S. citizens as of 2013 and are claimed as a dependent on a tax return filed in or after 1996. We link approximately 95% of children in each birth cohort to parents based on dependent claiming, obtaining a sample with 3.7 million children per cohort.\”

But the first graph shows data back to 1971. How were those earlier cohorts included? Again, they used tax data, but in this case they needed to use samples of that data. \”We first identify all children between the ages of 12 and 16 claimed as dependents in the 1987-98 SOI [Statistics of Income, the official tax data] cross-sections. We then pool all the SOI cross-sections that give us information for a given birth cohort. For example, the 1971 cohort is comprised of children claimed at age 16 in 1987, while the 1982 cohort is comprised of children claimed at ages 12-16 in 1994-98. The SOI sample grows from 4,331 children in 1971 to 9,936 children in 1982.\”

But there\’s another problem: their standard measure of mobility across the income distribution is to look at children\’s income at age 30, and then compare it to their parents\’ income. But as they write: \”We cannot measure children’s income at age 30 beyond the 1982 birth cohort because our data end in 2012.\” You\’ll notice that in the figure above, the line includes those born into the early 1990s. For those born from 1983-1986, they look at earnings as of age 26, which they argue are pretty closely correlated with earnings at age 30. Then for those born from 1987-1993, they project future income based on rates of college attendance.

In other words, the seemingly simple graph above, along with their other calculations, is based on samples of thousands of children from tax data for 1971-1980. They have tax-based data on 3.7 million children per cohort from 1980 to 1993, but they only have income up to age 30 for the first few birth cohorts, then income up to age 26 for the next few, and then projections based on college attendance after that.

Thus, while this study and several previous studies suggest that intergenerational mobility of incomes hasn\’t shifted much over time, the issue is certain to be revisited as new evidence emerges.

Eating Out

One of the subtle, substantial shifts in the American way of life is that people are spending more of their food budget eating away from home. And when they do so, they tend to eat less healthy food. The Economic Research Service of the U.S. Department of Agriculture offers this graph to illustrate the shift in spending on food prepared away from home.

USDA reports: \”Between 1977-78 and 2005-08, U.S. consumption of food prepared away from home increased from 18 to 32 percent of total calories. Meals and snacks based on food prepared away from home contained more calories per eating occasion than those based on at-home food. Away-from-home food was also higher in nutrients that Americans overconsume (such as fat and saturated fat) and lower in nutrients that Americans underconsume (calcium, fiber, and iron).\” They cite a December 2012 report, \”Nutritional Quality of Food Prepared at Home and Away From Home, 1977-2008,\” by Biing-Hwan Lin and Joanne Guthrie. That study finds: \”In the past three decades, FAH [food at home] has changed more in response to dietary guidance, becoming significantly lower in fat content and richer in calcium, whereas FAFH [food away from home] did not.\”

Sure, it\’s possible to overeat dramatically at home, too. Sometimes people do sit down in front of the television with a family-sized bag of chips or a quart of ice cream. But most people wouldn\’t grill a burger or deep-fry chicken for lunch at home, not to mention the ubiquitous (and irresistible) french fries and a sugared soda. Most people don\’t go to a restaurant and buy an apple and a bowl of lentil soup, either. The causes of obesity are many and mixed, but it seems plausible that paying others to tempt us with food, rather than spending time ourselves to make food, is part of the pattern.

How Pedestrian Countdown Signals Cause Auto Accidents

Pedestrian countdown signals at crosswalks show how much time is left before the light turns yellow, thus letting pedestrians know if they should rush to cross the street–or perhaps wait for the next light. But when these signals were introduced in Toronto, the rate of rear-end auto accidents was higher at the intersections with pedestrian signals compared to neighboring intersections. Sacha Kapoor and Arvind Magesan tell the story in \”Paging Inspector Sands: The Costs of Public Information,\” which appears in the most recent issue of the American Economic Journal: Economic Policy (6:1, pp. 92–113).  (The title was obscure to me: I\’ll introduce Inspector Sands at the end.)

The story starts a few years back when the city of Toronto decided to change over its existing streetlights to a more energy-efficient variety. The city then decided that while doing the change-over, it would also install pedestrian countdown signals. It would start in the places where it was cheapest to retrofit, and then work across the city. This history matters for the economic analysis, because the pedestrian countdown signals were installed for reasons, and in an order, that had nothing to do with whether an intersection was known to be unsafe or whether previous accidents had occurred. Thus, one can reasonably compare intersections with signals to nearby intersections without them, both before and after the signals were installed.

\”Our empirical analysis reveals that countdown signals resulted in about a 5 percent increase in collisions per month at the average intersection. The effect corresponds to approximately 21.5 more collisions citywide per month. The data also reveals starkly different effects for collisions involving pedestrians and those involving automobiles only. Specifically, although they reduce the number of pedestrians struck by automobiles, countdowns increase the number of collisions between automobiles. That the total number of collisions increased while collisions involving pedestrians decreased suggests that pedestrian countdown signals had a very significant effect on driver behavior. In fact, we find that collisions rose largely because of an increase in tailgating among drivers, a finding that implies drivers who know exactly when traffic lights will change behave more aggressively.\”

In short, the pedestrian countdown signals were good for pedestrians. But some of the drivers were watching the signals, trying to squeeze through before the light changed, and rear-ending other cars.

There are some narrow lessons here about pedestrian countdown signals and a broader lesson about how information works. Here are two narrow lessons, which come out of a more detailed analysis of the data: \”The first is cities might benefit from installing countdowns at historically highly dangerous intersections and from not installing them at historically safe intersections. The second conclusion is that while countdowns can improve safety in historically dangerous cities, they may be detrimental to safety in historically safe ones.\” Also, instead of having a pedestrian countdown signal that is visible to cars, it might make more sense to have a verbal countdown that could be heard only by pedestrians.

The broader lesson is that it\’s common to assume, without a lot of thought, that more information shared more broadly will make everyone better off. But the case of Toronto\’s countdown signals is an example of where making information available only to some (pedestrians) and not to others (drivers) is socially beneficial.

Another example involves the story of Inspector Sands in the title of the article. Kapoor and Magesan write: \”Few know who Inspector Sands is, and no one has ever met him. This is for good reason. Theater companies in the United Kingdom are believed to use the code name “Inspector Sands” in order to alert ushers to pending emergencies, such as fires and bomb threats, without inciting panic among their patrons. When theater  staff learn of a fire, for example, they page Inspector Sands to the fire’s location. When ushers arrive they can put out the fire or help to evacuate the premises in a discrete and orderly manner. By ensuring the threat remains hidden from the public eye, the code name allows ushers to complete the tasks without having to deal with panicked crowds.\” Thus, Inspector Sands is a case, like pedestrian countdown signals, where information is revealed in a limited way to some, because revealing it to all would risk causing harm.

Full disclosure: The AEJ:EP is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I work as Managing Editor.

The Status of Microfinance

The United Nations declared 2005 the Year of Microcredit. In 2006, the Nobel Peace Prize was awarded to Muhammad Yunus and the Grameen Bank in Bangladesh. But more recently, the Bangladeshi government pressured Yunus out of the Grameen Bank, has tried to prosecute him for tax fraud on what looks like frail evidence, and has proposed dismantling Grameen into 19 separate banks. A scandal erupted in India, where a microfinance lender based in Andhra Pradesh was accused of driving over 50 borrowers to suicide by shaming and threatening them when they could not repay their small loans on schedule. David Roodman sorts out the evidence on the current state of microfinance in \”Armageddon or Adolescence? Making Sense of Microfinance’s Recent Travails,\” written as Center for Global Development Policy Paper 35 (January 2014).

The first fact to recognize is that microfinance has expanded a great deal, reaching nearly $80 billion and with a considerable presence all around the world. The number of microloans outstanding topped 90 million in 2010, before declining in 2011 as a result of broad economic turmoil in the world economy and the fact that microfinance dried up in several regions and countries.

Roodman goes into some detail about the financial condition of microfinance institutions. Short story: they often benefit from being able to raise capital very cheaply, through donors or loans to them made at below-market interest rates.  However, with a few notable exceptions like the microfinance institutions in Andhra Pradesh, they are then able to operate in a reasonably self-sufficient way, with repayments of previous loans funding new loans. Moreover, a number of microfinance institutions are migrating away from just giving loans, and are starting to provide a fuller range of financial services to those with low and unstable incomes, including setting up bank deposits and facilitating domestic and international money transfers.

Roodman also reviews the evidence that the benefits of microfinance have been misunderstood and misstated. The common belief is that microfinance helps low-income people start businesses, which can then lift them out of poverty. But the best and most recent economic studies find that while microfinance does help start some businesses, the effect on poverty for those receiving the small loans is negligible. Of course, future studies may come up with different results. But for now, the strongest benefits from microfinance seem to be that it enables those with very low incomes to have greater control over their lives. They can borrow to buy a durable good. They can have a place where their savings are secure, or where they can transfer money. Moreover, from a social point of view, microfinance organizations are developing the organizational and managerial capabilities to operate like standard banks.

The microfinance industry has gotten large enough that it is also attracting private-sector capital. In one sense, being able to tap into private-sector financial markets is a sign of the demonstrated strength and viability of the microfinance industry. But it inevitably brings controversy when some people or organizations do well by providing goods and services to the poor. And in some cases, the institutions expanding into microfinance may take advantage of the lack of regulation and consumer protection in these economies, and the lack of sophistication of a number of their borrowers, to act in a predatory and unscrupulous manner. Roodman summarizes some key lessons for the current state of microfinance in this way:

\”In the sweep of history, countries that are wealthy today have had the most time to learn hard lessons (and sometimes forget them). In these nations, the lending system includes such actors as retail lenders; investors therein; credit information bureaus; and regulatory bodies that limit and monitor aspects of credit products such as term, term disclosure, even pricing. For institutions that take deposits, additional regulators come knocking—to insure those deposits or ensure that under ordinary circumstances capital is on hand to absorb losses and meet withdrawal demands. A truth often overlooked in excitement about microfinance as a retail service model is that it is no exception to this need for companion institutions. If anything, the need is greater when targeting the poor. …

\”Microfinance has been growing for 35 years and now reaches upwards of 100 million people, who cannot all be wrong in their judgments about the utility of microfinance. Moreover, most of them are served by institutions that are nearly or completely self-sufficient in financial terms; these MFIs [microfinance institutions] do not depend greatly on outside subsidies … Because of the vicissitudes of poverty, poor people need financial services more than the rich. Their financial options will always be inferior—that’s part of being poor—and microfinance offers additional options with distinctive strengths and weaknesses.

\”The microfinance industry has demonstrated an ability to build enduring institutions to deliver a variety of inherently useful services on a large scale. Nevertheless, the recent travails are signs that something is wrong in the industry. What is wrong is, ironically, what was once so right about the industry: it largely bypassed governments in favor of an experimental, bottom-up approach to institution building. The industry got so good at building institutions and injecting funds into them that it often forgot that a durable financial system consists of more than retail institutions and their investors. The narrow focus became a widening problem as microfinance grew. … To mature, the industry and its supporters should recognize the imbalance it has created. Where possible, they should work to strengthen institutions of moderation such as credit bureaus and regulators. Accepting that such institutions will often be weak, they should err on the side of investing less. In microfinance funding, less is sometimes more.\”

Smoking, 50 Years Later

In 1964, the U.S. Surgeon General famously issued the report finding that smoking was hazardous to your health. The current Surgeon General is now out with a report called \”The Health Consequences of Smoking—50 Years of Progress.\” Most of the nearly 1,000-page report (think before you hit \”Print All Pages\” on this one!) focuses on the health effects of tobacco use. Basic message: Tobacco use is much more hazardous than we thought in 1964, and even more hazardous than we thought 10 or 20 years ago. But at around p. 700, the report offers a few chapters on tobacco use and tobacco policy, which is where the economic issues begin to appear explicitly.

As a starting point, here are long-term trends for tobacco consumption in the United States. The first graph shows that per capita consumption of tobacco–that is, total use divided by total population–has fallen from 12 pounds per person per year in the 1950s to about 4 pounds per person per year at present.

This figure shows the share of the adult population that currently smokes cigarettes. More than half of men and about one-third of women smoked in 1964; now, it\’s around 20% for women and a little  higher for men.

Clearly, U.S. tobacco use has dropped a great deal. But as the Surgeon General\’s report reminds us: \”Despite declines in the prevalence of current smoking, the annual burden of smoking-attributable mortality in the United States has remained above 400,000 for more than a decade and currently is estimated to be about 480,000, with millions more living with smoking-related diseases. … Annual smoking-attributable economic costs in the United States estimated for the years 2009–2012 were between $289–332.5 billion, including $132.5–175.9 billion for direct medical care of adults, $151 billion for lost productivity due to premature death estimated from 2005–2009, and $5.6 billion (in 2006) for lost productivity due to exposure to secondhand smoke.\”
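As a quick sanity check on the report\’s arithmetic (a sketch using only the figures quoted above), the component ranges do add up to the $289–332.5 billion total:

```python
# Components of annual smoking-attributable economic costs, in billions of
# dollars, as quoted from the Surgeon General report above.
medical_low, medical_high = 132.5, 175.9  # direct medical care of adults
lost_productivity_death = 151.0           # premature death, 2005-2009 estimate
secondhand_smoke = 5.6                    # secondhand smoke exposure (2006)

total_low = medical_low + lost_productivity_death + secondhand_smoke
total_high = medical_high + lost_productivity_death + secondhand_smoke
print(round(total_low, 1), round(total_high, 1))  # 289.1 332.5
```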

Since the 1964 report, a variety of anti-tobacco policies have been enacted: taxes on cigarettes, lawsuits against tobacco companies, warning labels, anti-smoking media campaigns, limits on advertising cigarettes, support for quitting, and rules that limit exposure to secondhand smoke in public places. What difference has it all made, and where do we stand? The January 8, 2014, issue of the Journal of the American Medical Association (JAMA) has a useful set of articles reviewing the evidence and arguments (which can be read on-line with a slightly clunky browser).

In \”Tobacco Control and the Reduction in Smoking-Related Premature Deaths in the United States, 1964-2012,\” Theodore R. Holford, Rafael Meza, Kenneth E. Warner, Clare Meernik, Jihyoun Jeon, Suresh H. Moolgavkar, and David T. Levy take on the task of estimating how much smoking in the U.S. has been reduced as a result of the anti-smoking efforts. They write (for readability, I have deleted the bracketed information about statistical confidence intervals from this description): \”In 1964-2012, an estimated 17.7 million deaths were related to smoking, an estimated 8.0 million fewer premature smoking-related deaths than what would have occurred under the alternatives and thus associated with tobacco control (5.3 million men and 2.7 million women). This resulted in an estimated 157 million years of life saved, a mean of 19.6 years for each beneficiary (111 million for men, 46 million for women). During this time, estimated life expectancy at age 40 years increased 7.8 years for men and 5.4 years for women, of which tobacco control is associated with 2.3 years (30%) of the increase for men and 1.6 years (29%) for women.\”
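The per-beneficiary figure follows directly from the two totals in that quotation (a back-of-the-envelope check, nothing more):

```python
# Years of life saved per averted premature death, from the JAMA estimates
# quoted above: 157 million years across 8.0 million beneficiaries.
years_saved = 157_000_000
beneficiaries = 8_000_000

mean_years_per_beneficiary = years_saved / beneficiaries
print(round(mean_years_per_beneficiary, 1))  # 19.6
```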

What is the appropriate public policy with regard to tobacco? The Surgeon General\’s report writes: \”This nation must create a society free of tobacco-related death and disease.\” In a note before the report, Secretary of Health and Human Services Kathleen Sebelius writes: \”I believe that we can make the next generation tobacco-free.\” I\’m fine with all sorts of anti-tobacco policies, but I confess that I do not find the spirit of prohibition any more attractive when applied to tobacco than when it was applied to alcohol. People eat and drink all sorts of things that can cause ill-health, especially if taken to extremes. People also fail to exercise or to take multivitamins or small amounts of aspirin that would improve their health. But the usual starting point for economic analysis is that a free society is better off when people make their own choices. There are several potential reasons for reaching a different conclusion.

For example, one possible reason is that people lack information in making their decisions, and so the government should assure that such information is provided. After 50 years of warnings, and drilling the health hazards of smoking into schoolchildren everywhere, I find it difficult to believe that many people are ignorant of the health risks. Indeed, cigarettes were referred to as \”coffin nails\” as far back as the 19th century. Sure, it\’s possible to make the health warnings more explicit, even grotesque, but at some point such efforts stop being about \”information,\” and are essentially propaganda.

Another possible reason for anti-smoking policy is \”externalities\”–that is, smoking imposes costs on others. But when smoking reduces the productivity and wages of a smoker, the smoker bears that cost directly. When smoking shortens life expectancy, the smoker bears that cost directly, too. Indeed, even when smoking causes sicknesses that lead to expenditures on health care costs, the grim truth (as economists and demographers are willing to note off the record), is that shorter life expectancies mean less government spending for programs like Social Security and Medicare. In addition, many of those who die from smoking-induced strokes or heart disease impose relatively low costs on the health care system. The \”externalities\” argument is a strong justification for reducing unwanted exposure to second-hand smoke. But given that we already have taxes on tobacco products that can be viewed as helping to offset the health care costs imposed by these programs, it\’s not clear how much more policy intervention can be justified by this argument.

The final reason for anti-smoking policy is sometimes called \”internalities\”–that is, people would like to quit smoking, but many of them find themselves unable to do so, and so they need some public policy help to avoid imposing costs on themselves. The Surgeon General\’s report states that \”68.9% of current adult daily smokers in that year [2010] were interested in quitting smoking. … In 2012, the overall quit ratio (i.e., the percentage of ever smokers who had quit smoking) among U.S. adults was 55.1%, which means that in that year there were more former smokers than there were current smokers in the United States.\” In this spirit, the panoply of anti-smoking policies can be viewed as helping people who want to quit–or perhaps would prefer never to start the habit–to find the extra energy and incentive that they need to do so. But the \”internalities\” argument should not be pushed so far as to conclude that everyone who smokes always wants to quit. Some smokers will prefer to follow Mark Twain\’s old advice, related to his own prodigious cigar smoking: \”If you can\’t reach 70 by a comfortable road, don\’t go.\”
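The quit ratio in that quotation is worth unpacking, since it drives the \”more former than current smokers\” claim. Here\’s the definition applied to a hypothetical cohort of 1,000 ever-smokers (the 55.1% ratio is from the report; the head counts are invented):

```python
# Quit ratio = former smokers / ever smokers. Any ratio above 0.5 means
# former smokers outnumber current smokers. Counts below are hypothetical.
former_smokers = 551
current_smokers = 449
ever_smokers = former_smokers + current_smokers  # 1,000 in this sketch

quit_ratio = former_smokers / ever_smokers
print(quit_ratio)  # 0.551
```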

The evidence on cigarette taxes and the rate of smoking is compelling. The Surgeon General writes (citations omitted): \”In 2012, the federal tax rate was $1.01 per pack and the mean state tax rate was $1.53 per pack. The average price, nationally, for a pack of cigarettes in 2012 was $6.00.\” Here\’s a figure showing the real, inflation-adjusted price of a pack of cigarettes, compared with consumption of cigarettes. It\’s intriguing to note that cigarette consumption has fallen as the after-tax price has risen.

The Surgeon General\’s report also discusses the range of other anti-smoking policies. But the report touches only lightly on the most intriguing current method for reducing smoking: that is, electronic cigarettes that provide a dose of nicotine without producing smoke. That January 8, 2014, issue of JAMA includes an article by David B. Abrams called \”Promise and Peril of e-Cigarettes: Can Disruptive Technology Make Cigarettes Obsolete?\” Abrams writes that e-cigarette revenues have doubled each year since 2008, and have now reached $2 billion. There is some preliminary evidence that e-cigarettes might help people to quit smoking altogether, but even if this fails to hold up in further studies, e-cigarettes pose a vastly lower health risk than smoking tobacco, both to the user and to anyone around them. 
Abrams points out that e-cigarettes create a tension between those who believe in \”abstinence\” and those who believe in \”harm reduction.\” My own general view is that while it\’s fine in many family and educational contexts to suggest that abstinence would be a sensible individual decision, public policy should focus less on enforcing abstinence and more on offering opportunities for harm reduction. I\’ve never smoked tobacco in any form–cigarette, cigar, pipe–and I have no particular intention of giving it a try. But those of us who are regular consumers of caffeine should probably hesitate before we get too strident about those who prefer to consume nicotine.

Improper Federal Payments of $100 Billion Annually

To its credit, the U.S. Office of Management and Budget keeps a list of \”High-Error Programs,\” roughly defined as programs that pay out $750 million or more improperly. Here\’s the list for 2012.
In thinking about where the problem is most severe, the last two columns are where to focus. The last column on the right shows what proportion of payments are made in error; the second column from the right shows the dollar amount of the improper payments. Again, these numbers are official estimates from the U.S. government, not wild-eyed claims by opponents of these programs.

Even for a flinty-hearted economist like myself, some of these examples bother me more than others. For example, the school lunch program has a fairly high 15.5% rate of improper payments, but it seems to me unlikely that anyone in school cafeterias across the country is getting rich off these payments. My guess is that many of these improper payments are to children whose families are only borderline ineligible for the programs. And providing food in schools that serve low-income populations is a reasonable policy goal.

Or take the Social Security programs that handle Retirement, Survivor\’s, and Disability Insurance: they make the list because even a low error rate (0.4%) applied to a very large amount of total spending ($717 billion) adds up to a substantial sum.
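A quick calculation shows why a 0.4% error rate still clears the $750 million bar for the \”High-Error\” list (a sketch using only the figures above):

```python
# Even a small error rate on a huge base yields a large dollar figure.
total_spending = 717e9  # Social Security outlays in dollars, as cited above
error_rate = 0.004      # 0.4% improper payment rate

improper_payments = total_spending * error_rate
print(f"${improper_payments / 1e9:.1f} billion")  # $2.9 billion
```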

But some of the other categories are more troubling. It\’s troubling that the top three programs on the list all involve health care spending through Medicare and Medicaid, and total $61.9 billion in improper payments. As the US is struggling to implement a new system of health insurance under the Affordable Care Act, with heavy and occasionally capricious government oversight, the table suggests that the federal government is not well-situated to oversee day-to-day medical interactions and decisions.

While I\’m a fan of the Earned Income Tax Credit, the 22.4% rate of improper payments is nonetheless striking and disheartening. As I discussed here, the problem seems to be a mixture of people whose economic and family lives are often in flux and who often have no particular facility for filling out detailed paperwork and records, combined with a complex set of government rules. Throw some opportunistic fraud into the mixture as well, and the overpayment rate gets high.

Once the federal government sends out the checks, the improper payments are rarely recovered. The website states optimistically that recovery of improper payments was up to $4 billion over the previous three years, thanks mostly to efforts in Medicare. But with the improper payments running at $100 billion per year on the government\’s own estimates, this hardly seems a reason to toss the confetti.

U.S. Household Finances Rebound

One signal for whether the U.S. economy is ready for a more robust recovery is the extent to which the financial position of households has rebounded. Here are some illustrative figures, taken from the January 2014 issue of Economic Trends from the Cleveland Fed.

O. Emre Ergungor and Daniel Kolliner write about \”Household Economic Conditions.\” Here\’s a figure showing the movements in household wealth since 2000. Household assets and net worth have now rebounded and surpassed their pre-recession highs.

Part of what\’s happening here is that households have trimmed back on many of their debts. This figure shows the change in outstanding debt in various categories over the previous four quarters. During the housing bubble, for example, mortgage debt was growing at more than 10% per year. But household mortgage debt has been contracting (that is, negative growth) since about 2008. The authors write: \”Revolving consumer credit balances plummeted in 2008 and are currently barely higher than their level in the third quarter of 2012. Outstanding home mortgage debt is still contracting due to record write-offs and reduced demand for homes in previous years. Nonrevolving consumer credit, which consists of secured and unsecured credit for student loans, automobiles, durable goods, and other purposes, is the only credit category that shows some sign of life. It is currently 8.5 percent above year-ago levels. Note, however, that the student loan component is entirely driven by federal government loans to students and does not reflect private market activity.\”

The combination of lower household debts and sustained low interest rates means that households are spending less on debt service. They write: \”The financial obligation ratio, which expresses household liabilities, such as credit card payments, mortgage payments, home property taxes, and rent payments, as a percentage of disposable income, is at its lowest level since the third quarter of 1981.\”

A result of these changes is that retail sales and consumption overall, if not yet back to healthy growth rates, are at least solidly back in positive growth after their nosedive during the Great Recession.

Of course, the unemployment rate remains high, as does the number of long-term unemployed, and there are concerns that some workers are not being counted as unemployed because they have become too discouraged to look for work. But noting that the economy is improving is not to make the claim that it\’s already a bright sunshiny day. One final pattern caught my eye in an article on \”Employee Compensation Costs during the Recovery,\” by Joel Elvery. He points out that the patterns of wages and benefits have been diverging in recent years.

This figure needs to be interpreted with care, because hourly compensation costs are affected by which workers have jobs. Thus, the rise in wages and salary around 2008 is not because lots of workers saw a big raise, but instead because lower-paid workers were more likely to become unemployed, and so the average wage and salary for those with jobs was  higher as a result. But the overall pattern here is clear enough. Over the last decade, wages and salaries have been pretty flat, but the costs to employers of benefits like retirement and savings accounts, as well as health insurance, have been rising. As I\’ve written before on this blog, health care costs (along with other benefits) have been eating your pay raise.