The 2015 Nobel Prize: Angus Deaton

Economics is a tree with many branches, but consumption patterns, and the standard of living more broadly understood, are certainly among the most important. Angus Deaton won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2015–commonly known as the Nobel Prize in economics–"for his analysis of consumption, poverty, and welfare." Each year, the committee publishes a number of materials about the award at its website, including background papers and interviews. Here, I'll focus on two background publications whose names convey their ease of readability: "Information for the Public: Consumption, great and small" and "Scientific Background: Angus Deaton: Consumption, poverty and welfare."

Every year I feel a little defensive when trying to explain the intellectual contributions of the winner of the Nobel prize in economics. Non-economists want to know: "What big discovery did he make, or what big question did he solve?" But professional economists are more interested in questions like: "In what ways did he develop the theory and the empirical evidence to increase our understanding of economic behavior and the economy?" Here, let me start by quoting how the committee answered this broad question in the "Scientific Background" paper, and then let me try to disentangle the jargon a bit and offer a few thoughts of my own.

Over the last three to four decades, the study of consumption has progressed enormously. While many scholars have contributed to this progress, Angus Deaton stands out. He has made several fundamental and interconnected contributions that speak directly to the measurement, theory, and empirical analysis of consumption. His main achievements are three.

First, Deaton’s research brought the estimation of demand systems – i.e., the quantitative study of consumption choices across different commodities – to a new level of sophistication and generality. The Almost Ideal Demand System that Deaton and John Muellbauer introduced 35 years ago, and its subsequent extensions, remain in wide use today – in academia as well as in practical policy evaluation.

Second, Deaton’s research on aggregate consumption helped break ground for the microeconometric revolution in the study of consumption and saving over time. He pioneered the analysis of individual dynamic consumption behavior under idiosyncratic uncertainty and liquidity constraints. He devised methods for designing panels from repeated cross-section data, which made it possible to study individual behavior over time, in the absence of true panel data. He clarified why researchers must take aggregation issues seriously to understand total consumption and saving, and later research has indeed largely come to address macroeconomic issues through microeconomic data, as such data has increasingly become available.

Third, Deaton spearheaded the use of household survey data in developing countries, especially data on consumption, to measure living standards and poverty. In so doing, Deaton helped transform development economics from a largely theoretical field based on crude macro data, to a field dominated by empirical research based on high-quality micro data. 

As just one example of how these different ideas come together, consider the problem of learning about consumption levels of households in low-income countries. Until just a few decades ago, it was common for researchers in this area to look at national-level data on patterns of consumption and income, and then divide by population to get an average. Deaton was at the front edge of the group of researchers who pushed for the World Bank to develop the Living Standards Measurement Study, a set of detailed surveys that collect data on nationally representative samples of households in countries around the world. This is an example of the third point in the committee's list above.

But an obvious practical problem has to be addressed in any such survey. If you want to know how the consumption and saving decisions of households evolve over time–for example, it may take several years for a household to adjust fully to a sharp change in prices or incomes–it seems as if you need to follow the same group of households over time. Economists call this "panel data." But panel data can be hard to collect, because tracking people over years is difficult for survey researchers. People move, or households split up, and especially in low-income countries, finding out where they went isn't easy. However, Deaton showed that if you had a series of cross-sectional surveys over time, you had enough data to say how households with certain shared characteristics reacted over time: from a series of surveys of many different individuals, you can construct a "pseudo-panel" that serves much the same purpose as actual panel data. This is an example of the second point made by the committee above.
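
The mechanics of a pseudo-panel are simple enough to sketch: pool the repeated cross-sections, assign each household to a birth cohort, and average within each cohort-year cell. Here is a minimal illustration in Python, using synthetic data and made-up variable names rather than any survey Deaton actually used:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic repeated cross-sections: a fresh sample of households each year,
# so no household is observed twice (made-up data and variable names).
rows = []
for year in range(1990, 1995):
    n = 2000
    birth_year = rng.integers(1930, 1976, size=n)
    age = year - birth_year
    income = 10_000 + 400 * age + rng.normal(0, 3_000, size=n)
    consumption = 0.8 * income + rng.normal(0, 2_000, size=n)
    rows.append(pd.DataFrame({"year": year, "birth_year": birth_year,
                              "income": income, "consumption": consumption}))
surveys = pd.concat(rows, ignore_index=True)

# Build the pseudo-panel: define cohorts by five-year birth bands and average
# within each (cohort, survey year) cell. The cohort means can then be tracked
# across survey years as if they were observations on a single panel unit.
surveys["cohort"] = (surveys["birth_year"] // 5) * 5
pseudo_panel = (surveys
                .groupby(["cohort", "year"])[["income", "consumption"]]
                .mean()
                .reset_index())
print(pseudo_panel.head())
```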

Another analytical problem is how to combine the data from different households to draw overall conclusions about how consumption and saving shift in response to changes in prices or income. When Deaton first started writing about these issues in the 1970s, a common practice was to treat the economy as if it were one giant consumer reacting to price and income changes. Perhaps not surprisingly, such calculations didn't work well in describing how patterns of demand across goods shifted. Deaton, working with John Muellbauer, developed a more flexible framework for looking at demand across a wide range of goods and services, one that allows household demand patterns to vary (for example, with the number of people in a household and how many of them are children). It turns out that by allowing this extra flexibility, it becomes possible to draw sensible conclusions about consumption patterns from the data. This is an example of the first point made by the committee above.
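
For readers who want a taste of what that flexibility looks like, the budget-share equations at the heart of the Deaton-Muellbauer "Almost Ideal Demand System" are usually written as follows (this is the standard textbook statement of the model; the notation here is mine):

```latex
% Budget share of good i in the Almost Ideal Demand System
w_i = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \beta_i \ln\!\left(\frac{x}{P}\right),
\qquad
\ln P = \alpha_0 + \sum_k \alpha_k \ln p_k
      + \tfrac{1}{2}\sum_k \sum_j \gamma_{kj} \ln p_k \ln p_j
```

Here w_i is the share of total spending x that goes to good i, the p_j are prices, and P is a price index; adding-up, homogeneity, and symmetry restrictions on the alpha, beta, and gamma parameters keep the system consistent with demand theory while still letting budget shares respond flexibly to prices and total expenditure.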

Once you have data and a theoretical framework in hand, you can seek out some interesting conclusions about consumption patterns in low-income countries. For example, in one of his papers, Deaton found that low income tends to lead to malnutrition, but that malnutrition doesn't seem to be an important factor in causing low incomes. In another paper, he found that household purchases of adult goods like alcohol and tobacco change in the same ways when either a boy or a girl is born during normal times, but in adverse times the adult purchases are cut by less when a girl is born–providing evidence that in that setting fewer family resources are committed to raising girls. Another paper found that expenditures on children are about 30-40% of expenditures on adults–which implies that when comparing countries with higher proportions of children to countries with lower proportions of children, you need to avoid comparisons that just divide economic output by the total number of people. Deaton has also been at the center of efforts to use the available data and theory to measure the global level of poverty.
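
The child-cost point has a concrete arithmetic payoff. Here is a hedged sketch of how an adult-equivalence adjustment changes a cross-country comparison; the population and output numbers are invented, and the 0.35 weight simply splits the 30-40% range quoted above:

```python
# Illustrative only: two hypothetical countries with the same output and the
# same population size, but different shares of children. Following the
# 30-40% finding quoted above, count each child as roughly 0.35 adults.
CHILD_WEIGHT = 0.35

def per_capita_and_per_adult_equivalent(total_output, adults, children):
    people = adults + children
    adult_equivalents = adults + CHILD_WEIGHT * children
    return total_output / people, total_output / adult_equivalents

# Country A: 60 adults and 40 children; Country B: 80 adults and 20 children.
# Both produce 1,000 units of output (all numbers are invented).
for name, adults, children in [("A (younger population)", 60, 40),
                               ("B (older population)", 80, 20)]:
    pc, pae = per_capita_and_per_adult_equivalent(1_000, adults, children)
    print(f"Country {name}: per capita {pc:.1f}, per adult equivalent {pae:.1f}")

# Per capita output is identical, but output per adult equivalent is higher in
# Country A, because more of its population are children with smaller needs.
```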

Over the years, Deaton has appeared in the pages of the Journal of Economic Perspectives (where I work as Managing Editor) a number of times. Along with the materials from the Nobel prize committee, these articles give the interested reader a sense of Deaton's approach, as well as his intellectual breadth in areas that didn't get much mention from the committee. (As always, all articles from JEP are freely available compliments of the American Economic Association.)

Unpaid Care Work, Women, and GDP

The "economy" measures what is bought and sold. Thus, it is standard in introductory economics classes to point out that if my neighbor and I both mow our own lawns, it's not part of GDP. But if we hire each other to mow each other's lawns, GDP is then higher–even though exactly the same amount of lawn-mowing output was produced. In a broader sense, what would be the economic value of nonmarket family services if they were valued in monetary terms?

The McKinsey Global Institute provides some background on this issue in its September 2015 report, "The Power of Parity: How Advancing Women's Equality Can Add $12 Trillion to Global Growth." The report calculates that if women participated in the paid labor force at the same level as the leading country in their region (thus not holding those in Latin America, Africa, or the Middle East to the standard of northern Europeans), it would add $12 trillion to global GDP. However, the report also notes that the women who are not in the paid labor force are of course already working, producing at least $10 trillion in nonmarket output.

Beyond engaging in labor markets in ways that add to GDP, a large part of women's labor goes into unpaid work that is not accounted for as GDP. Women do an average of 75 percent of the world's total unpaid care work, including the vital tasks that keep households functioning, such as child care, caring for the elderly, cooking, and cleaning. In some regions, such as South Asia (including India) and MENA, women are estimated to undertake as much as 80 to 90 percent of unpaid care work. Even in Western Europe and North America, their share is high at 60 to 70 percent. Time spent in unpaid care work has a strong negative correlation with labor-force participation rates, and the unequal sharing of household responsibilities is a significant barrier to enhancing the role of women in the world economy. Applying conservative estimates based on available data on minimum wages, the unpaid care work of women could be valued at $10 trillion of output per year—an amount that is roughly equivalent to 13 percent of global GDP. In the United States alone, the value of unpaid care work carried out by women is about $1.5 trillion a year. … Data from 27 countries indicate that some 61 percent of unpaid care work (based on a simple average across countries) is routine household work, 14 percent involves taking care of household members, 11 percent is time spent on household purchases, and 10 percent is time spent on travel …
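
The $10 trillion figure comes from a replacement-cost style calculation: hours of unpaid care multiplied by a (minimum) wage. Here is a back-of-the-envelope version with purely illustrative inputs, not the MGI report's actual data:

```python
# Back-of-the-envelope valuation of unpaid care work. All inputs below are
# illustrative placeholders, not the MGI report's underlying data.
women_doing_unpaid_care = 2.0e9   # hypothetical number of women worldwide
hours_per_day = 4.5               # hypothetical average hours of unpaid care
wage_per_hour = 3.40              # hypothetical minimum-wage-style valuation, USD

annual_value = women_doing_unpaid_care * hours_per_day * 365 * wage_per_hour
global_gdp = 78e12                # world GDP around 2014, roughly, in USD

print(f"Estimated value: ${annual_value / 1e12:.1f} trillion per year")
print(f"Share of global GDP: {annual_value / global_gdp:.0%}")
```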

The amount of unpaid work that women do is closely related to female participation in the paid labor force. The horizontal axis shows the ratio of the labor force participation rate of women to that of men. The vertical axis shows the ratio of time spent on unpaid care by women to the time spent by men. Thus, in India, women spend about 10 times as many hours on unpaid care as men, and the labor force participation rate for women is about one-third that for men. In a number of high-income countries, women spend 1.5-2 times as many hours on unpaid work as men, and the labor force participation rate for women is about 80% of the level for men. (For the record, "unpaid care" is defined not just as care for other family members, but also includes housework and voluntary community work.) The MGI report notes: "Globally, women spend three times as many hours in unpaid domestic and care work as men."

Some additional background on unpaid work by women is available in "Unpaid Care Work: The missing link in the analysis of gender gaps in labour outcomes," written by Gaëlle Ferrant, Luca Maria Pesando and Keiko Nowacka for the OECD Development Centre in December 2014.

The two figures show ratios of time spent on unpaid care by women relative to men: the left-hand figure is across regions; the right-hand figure is across countries grouped by income level. The left-hand figure shows that the female-to-male ratio of time spent on unpaid care is nearly 7 in the Middle East and North Africa region and in the South Asia region, but is below 2 in Europe and North America. The right-hand figure shows that the ratio is roughly 3 in low-income, lower-middle-income, and upper-middle-income countries, but less than 2 in high-income countries.

The level of unpaid care matters for several reasons. The most obvious, perhaps, is the McKinsey calculation that women moving into the paid labor force could raise world GDP by $12 trillion. But there are a number of more subtle ways in which the inclusion of unpaid work alters one's sense of social output. Ferrant, Pesando and Nowacka point out (citations omitted):

"It leads to misestimating households' material well-being and societies' wealth. If included, unpaid care work would constitute 40% of Swiss GDP and would be equivalent to 63% of Indian GDP. It distorts international comparisons of well-being based on GDP per capita because the underestimation of material well-being would be proportionally higher in those countries where the share of housewives and home-made consumption is higher. For instance, by including Household Satellite Accounts the GDP per capita of Italy reaches from 56% to 79% of the USA's GDP, and 98% to 120% of that of Spain."
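
The Italy/US comparison is just a ratio that moves when nonmarket production is added to both countries' GDP. Here is a stylized version of that arithmetic, with the household-production shares invented for illustration rather than taken from the satellite accounts:

```python
# Stylized arithmetic: adding household production to GDP changes a
# cross-country ratio when its relative size differs between the countries.
# The percentage additions below are invented for illustration.
us_gdp_per_capita = 100.0                 # index the US at 100
italy_gdp_per_capita = 56.0               # 56% of the US, as in the quote

us_household_production = 0.25 * us_gdp_per_capita         # hypothetical +25%
italy_household_production = 0.60 * italy_gdp_per_capita   # hypothetical +60%

ratio_market_only = italy_gdp_per_capita / us_gdp_per_capita
ratio_with_households = ((italy_gdp_per_capita + italy_household_production)
                         / (us_gdp_per_capita + us_household_production))

print(f"Market GDP only: Italy at {ratio_market_only:.0%} of the US level")
print(f"Including household production: {ratio_with_households:.0%}")
```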

In a broader sense, of course, the issue is not to chase GDP, but to focus on the extent to which people around the world have the opportunity to fulfill their capabilities and to make choices about their lives. Countries where women have more autonomy also tend to be countries where the female-to-male ratios of time spent on unpaid care are not as high. The share of unpaid care provided by women is highly correlated with women's ability to participate in the paid workforce, to acquire the skills and experience that lead to better-paying jobs, and to take part in other activities like political leadership. Ferrant, Pesando and Nowacka write:

"The unequal distribution of caring responsibilities between women and men within the household thus also translates into unequal opportunities in terms of time to participate equally in paid activities. Gender inequality in unpaid care work is the missing link in the analysis of gender gaps in labour outcomes in three areas: gender gaps in labour force participation rates, quality of employment, and wages."

In a similar vein, the MGI report notes: "Beyond GDP, there could be other positive effects. For instance, more women could be financially independent, and there may be intergenerational benefits for the children of earning mothers. In one study of 24 countries, daughters of working mothers were more likely to be employed, have higher earnings, and hold supervisory roles."

What are the pathways by which the time spent on unpaid care activities might be reduced? Many women are familiar with the feeling that a large part of their take-home pay goes to child care, a housecleaner, lawn care, takeout food when there is no time to cook, and the like. Having people pay each other for what was previously unpaid work will add to GDP, but it may not add to total output broadly understood.

Thus, the challenge is to reduce unpaid work in ways that don't just swap unpaid work around, but actually free up time and energy. Both the McKinsey report and the OECD authors have similar comments about how this has happened in practice. Historically, one major change for high-income countries has been the arrival of labor-saving inventions. The MGI report notes:

Some of the routine household work and travel time can be eliminated through better public services and greater automation. For example, in developing countries, the time spent on household chores is increased by poor public infrastructure. Providing access to clean water in homes can reduce the time it takes to collect water, while electricity or solar power can eliminate the time spent hunting for firewood. Tools such as washing machines and kitchen appliances long ago lightened much of the drudgery associated with household work in higher-income countries, and millions of newly prosperous households in emerging economies are now adopting them, too. Innovations such as home-cleaning robots may one day make a leap forward in automating or streamlining many more tasks.

A number of other issues affect the balance between unpaid and paid labor: the prevalence of workplace policies like family leave and flex-time; the availability of high-quality child care and elder care; the length of school days, along with preschool and after-school programs; and the extent to which money that women earn in the paid workforce is reduced by government taxes, or by the withdrawal of transfers that would otherwise have been available. And of course, social attitudes about the role of women are central to these outcomes.

The MGI report gives a sense of how these forces have evolved in the US economy in recent decades:

In the United States, for example, labor-force participation by women of prime working age rose from 44 percent in 1965 to 74 percent in 2010. Over this period, the time women spent on housework was cut almost in half, but the hours they spent on child care actually rose by 30 percent, reflecting evolving personal and familial choices. Both housework and child care became more equitably shared. Men's share of housework rose from 14 percent in 1965 to 38 percent in 2010, and their share of child care from 20 percent to 34 percent.

The ultimate constraint which rules us all is that a seven-day week has 168 hours. A gradual reduction in the time spent on unpaid care activities, which have traditionally been primarily the job of women, is part of what makes society better-off.

The Eurozone Crisis: Crystallizing the Narrative

The economy of the eurozone makes the US economy look like a picture of robust health by comparison. Richard Baldwin and Francesco Giavazzi have edited The Eurozone Crisis: A Consensus View of the Causes and a Few Possible Solutions, a VoxEU.org book from the Centre for Economic Policy Research in London. The book includes a useful introduction from the editors, followed by 14 mostly short and all quite readable essays.

As a starting point for non-European readers, consider that the unemployment rate across the 19 eurozone countries is still in double digits, at about 11%.

While the US economy has experienced disappointingly sluggish growth since the end of the Great Recession in 2009, the eurozone economy went through a follow-up recession running through pretty much all of 2011 and 2012, and since then has experienced growth that is sluggish even by US standards.

What went wrong in the eurozone? Here's the capsule summary from the introduction by Baldwin and Giavazzi:

The core reality behind virtually every crisis is the rapid unwinding of economic imbalances. … In the case of the EZ [eurozone] crisis, the imbalances were extremely unoriginal. They were the standard culprits that have been responsible for economic crises since time immemorial – namely, too much public and private debt borrowed from abroad. Too much, that is to say, in relation to the productive investment financed through the borrowing. 

From the euro’s launch and up until the crisis, there were big capital flows from EZ core nations like Germany, France, and the Netherlands to EZ periphery nations like Ireland, Portugal, Spain and Greece. A major slice of these were invested in non-traded sectors – housing and government services/consumption. This meant assets were not being created to help pay off the borrowing. It also tended to drive up wages and costs in a way that harmed the competitiveness of the receivers’ export earnings, thus encouraging further worsening of their current accounts. 

When the EZ crisis began – triggered ultimately by the Global Crisis – cross-border capital inflows stopped. This ‘sudden stop’ in investment financing raised concerns about the viability of banks and, in the case of Greece, even governments themselves. The close links between EZ banks and national governments provided the multiplier that made the crisis systemic. 

Importantly, the EZ crisis should not be thought of as a sovereign debt crisis. The nations that ended up with bailouts were not those with the highest debt-to-GDP ratios. Belgium and Italy sailed into the crisis with public debts of about 100% of GDP and yet did not end up with IMF programmes, while Ireland and Spain, with ratios of just 40% (admittedly kept artificially low by large tax revenues associated with the real estate bubble), needed bailouts. The key was foreign borrowing. Many of the nations that ran current account deficits – and thus were relying on foreign lending – suffered; none of those running current account surpluses were hit.

In working through their detailed explanation, here are a few of the points that jump out at me. When the euro first came into widespread use in the early 2000s, interest rates fell throughout the eurozone and all the eurozone countries were able to borrow at the same rate; that is, investors were treating all governments borrowing in euros as having the same level of risk–Germany the same as Greece. Here's a figure showing the falling costs of government borrowing and the convergence of interest rates across countries.

The crucial patterns of borrowing that emerged were not about lending from outside Europe to inside Europe, but instead about lending between the countries of the eurozone, a pattern which strongly suggests that the common currency was at a level generating ongoing trade surpluses and capital outflows from some countries, with corresponding trade deficits and capital inflows for other countries. Baldwin and Giavazzi write:

To interpret the individual current accounts, we must depart from an essential fact: The Eurozone’s current account as a whole was in balance before the crisis and remained close to balance throughout. Thus there was very little net lending from the rest of the world to EZ countries. Unlike in the US and UK, the global savings glut was not the main source of foreign borrowing – it was lending and borrowing among members of the Eurozone. For example, Germany’s large current account surpluses and the crisis countries’ deficits mean that German investors were, on net, lending to the crisis-hit nations – Greece, Ireland, Portugal and Spain (GIPS).

Sitting here in 2015, it seems implausible that policymakers around Europe weren't watching these emerging imbalances with care and attention, and planning ahead for what actions could be taken with regard to government debt, private debt, banking reform, central bank lender-of-last-resort policy, and other issues. But of course, it's not uncommon for governments to ignore potential risks, and only make changes after a catastrophe has occurred. Baldwin and Giavazzi note wryly:

It is, ex post, surprising that the building fragilities went unnoticed. In a sense, this was the counterpart of US authorities not realising the toxicity of the rising pile of subprime housing loans. Till 2007, the Eurozone was widely judged as somewhere between a good thing and a great thing.

And what has happened in the eurozone really is an economic catastrophe. Baldwin and Giavazzi conclude:

The consequences were and still are dreadful. Europe’s lingering economic malaise is not just a slow recovery. Mainstream forecasts predict that hundreds of millions of Europeans will miss out on the opportunities that past generations took for granted. The crisis-burden falls hardest on Europe’s youth whose lifetime earning-profiles have already suffered. Money, however, is not the main issue. This is no longer just an economic crisis. The economic hardship has fuelled populism and political extremism. In a setting that is more unstable than any time since the 1930s, nationalistic, anti-European rhetoric is becoming mainstream. Political parties argue for breaking up the Eurozone and the EU. It is not inconceivable that far-right or far-left populist parties could soon hold or share power in several EU nations. Many influential observers recognise the bind in which Europe finds itself. A broad gamut of useful solutions have been suggested. Yet existing rules, institutions and political bargains prevent effective action. Policymakers seem to have painted themselves into a corner.

For those looking for additional background on eurozone issues, here are links to a few previous posts on the subject:

Update on US Unions

For those looking for an update on the modern role of American unions, the Council of Economic Advisers offers a useful starting point with its October 2015 "Issue Brief: Worker Voice in a Time of Rising Inequality."

The report begins with some graphs about US union membership that, while their shape is familiar to me, have not lost their power to shock. For example, US union membership peaked back in the 1950s and 1960s, and has been on a downward path as a share of total US employment since then. (The two different colored lines show that two different sources of data are being used.)

The share of US private sector workers belonging to unions is even lower, while public sector unionization rates are much higher.

What effects do unions have in modern labor markets and job conditions? The report notes: "Unionized workers still command a sizable wage premium of up to 25 percent relative to similar nonunionized workers, but that premium has fallen slightly over the past couple of decades. … After controlling for observable differences between union and nonunion workers, research finds that workers who are represented by a union are about 30 percent more likely to be covered by health insurance at their job than similar nonunion workers. In addition, union workers are about 25 percent more likely to have retiree health benefits than similar nonunion workers …"
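
Premium estimates of this kind usually come from wage regressions that compare union and nonunion workers with similar observable characteristics. Here is a minimal sketch of that sort of specification on synthetic data (not the studies the report cites), in which the coefficient on the union indicator is the estimated log-wage premium:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5_000

# Synthetic workers with a union log-wage premium of roughly 20 log points
# built in; the point is only to show the shape of the specification.
df = pd.DataFrame({
    "education": rng.integers(10, 19, size=n),
    "experience": rng.integers(0, 41, size=n),
    "union": rng.integers(0, 2, size=n),
})
df["log_wage"] = (1.0 + 0.08 * df["education"] + 0.01 * df["experience"]
                  + 0.20 * df["union"] + rng.normal(0, 0.4, size=n))

# The coefficient on 'union' is the estimated log-wage premium after
# controlling for the observable differences included in the regression.
fit = smf.ols("log_wage ~ education + experience + union", data=df).fit()
print(fit.params["union"])
```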

The reason for this wage premium is a subject of some controversy. One possibility is that unionized workers at firms with large and ongoing profits can negotiate in a way that gives the workers a larger share of those profits–a model that fits fairly well with the auto and steel unions of a half-century ago. Another possibility is that union workers get paid more because they have higher productivity. This could happen in various ways: less worker turnover, better training, and the like. It could also happen because, even among workers who look the same based on observable characteristics in the data, the more motivated and productive workers are more likely to join a union. The possibility barely mentioned in this report is that employers react to a union by investing more heavily in capital equipment and by outsourcing to non-union firms, which raises the pay of the remaining union workers but reduces the number of union jobs over time.

Although this report has lots of useful information and embeds links to a number of relevant academic studies, reports from the Council of Economic Advisers are inevitably both economic and political documents. The main place the influence of politics shows up here is in the very brief (four sentences, one paragraph) discussion of "Union Impact on Firm Performance." You will be relieved, as I was, to read that unions always have a positive effect on firms. There was apparently no need to mention the possibility that unions might create inflexibilities: for example, in the size of the compensation bill, in a burden of retiree benefits, in job assignments, in limits on incentive pay, or in adopting new organizational or technological approaches. From the discussion in this report, unionized workforces bring nothing but benefits to firms, and so it would seem that every US employer should be lining up to unionize its workforce. I do think one can make a strong case that unions have bolstered productivity and benefited firms at certain companies and in certain countries, at various places and times. But the possibility that unions at other times and places have had negative effects on firms is not discussed here.

But on a number of other subjects, the report gives a more balanced overview. For example, on the question of whether unions improve job safety, the report points out that while one might be predisposed to believe this claim, in fact unionized workplaces seem to suffer more injuries. There are a number of possible reasons for this pattern. Perhaps workplaces with a lot of injuries are more likely to end up with unions. Perhaps unionized workplaces are more likely to keep a true record of workplace injuries. Perhaps the main gain of unions is not primarily a smaller number of workplace injuries, but that the injuries which occur are milder and less harmful. Work on sorting out these issues is still underway.

On the question of whether unions lead to improved skills and training, the report cites strong evidence from a number of other countries that this connection exists. But for the US, the report notes: \”Other work examining unionized workers in the United States and Canada also finds that unionized workers tend to develop skills that are relatively job-specific. However, some work suggests little to no difference in training between union and nonunion firms, while other research suggests that employers in unionized workplaces offer less training than those in nonunionized workplaces.\”

Finally, there is the issue of  how much the decline in unionization has contributed to the rise in inequality. The graph shows the basic observation suggesting a connection here: the bottom 90% of the income distribution had a higher share of total income when unionization rates were at their highest, and the share of income going to that group has declined as unionization rates have fallen.


Of course, this correlation doesn't prove causation. It's easy to imagine that certain factors in the economy–say, the rise in information and communication technology together with an increase in global trade–might account for both the fall in unionization and rising inequality of the income distribution. But more detailed studies that try to take these other factors into account do suggest that the decline in unionization is part of the story, too: "A recent paper finds that declines in unionization explain one-fifth of the increase in wage inequality between 1973 and 2007 for women and one-third of the increase for men. For men, the effect is comparable to the effect of the increasing stratification of wages by education (for women, the effect of deunionization is about half that of education)."

However, given the changes over time in who is a union member, it's not clear that a rise in unionization right now would have a substantial effect on inequality. Today's union members are less likely to be low-wage workers; indeed, union workers are now more likely to be college-educated than the average US worker.

The report notes:

"Union membership has also become more representative of the population, with the share of members who are female or college-educated rising quickly. Studies have shown that union wage effects are largest for workers with low levels of observed skills and that unionization can reduce wage inequality among workers partially by increasing wages at the bottom of the distribution and by reducing pay dispersion within unionized firms and industries. Since both the union wage premium and the coverage of low-skilled workers, who receive the highest wage premium, have fallen, unionization's ability to reduce inequality has very likely been limited in recent years."

During the last few decades, I've seen occasional articles about how US unions are just about to make a comeback. Maybe they will, although after decades of ongoing decline I'm skeptical. But the report also points to other organizations that can offer a voice for workers. For example, Germany is well-known for its "works councils":

Works councils, groups of workers that represent all employees in discussions with their employer but are not part of a formal trade union, are a common form of worker voice outside of trade unions in Germany and, under the authority of the German Works Constitution Act of 1952, can be set up in any private workplace with at least five employees. Works councils ensure that workplace decisions, such as those about pay, hiring, and hours, involve workers – they have both participation rights (where works councils must be informed and consulted about certain issues) and co-determination rights (where the works council must be involved in the decision). Works councils are separate from trade unions: trade unions exist to protect their members, while works councils exist to integrate workers with management into the decision making process.

In the US, there are some organizations forming that represent independent contractors and freelancers.

A similar organization for workers not covered by the National Labor Relations Act, and that operates closely with the organized labor community, is the New York Taxi Workers’ Alliance. Formed in 1998, the New York Taxi Workers’ Alliance helps advocate for taxi drivers, who are primarily independent contractors rather than employees. The organization expanded nationally in 2011 as the National Taxi Workers’ Alliance, and became the first charter for non-traditional workers since the farm workers in the 1960s, and the first one ever of independent contractors; they are recognized by the AFL-CIO as an affiliate organization. The group advocates for its members in much the same way as a traditional union, but their right to collectively bargain is not protected under the National Labor Relations Act due to their non-employee status.

I don't know if these kinds of non-union worker-voice organizations have a real future in the US context. But I do think that worker voice offers potentially valuable input, and that many workers want a voice that can speak on their behalf to management. If workers don't have workplace-related organizations to provide such a voice, they will inevitably turn their voices to politicians who promise legislation to address their concerns.

Rethinking Parameters of the US Welfare State

Here's a three-question true-or-false quiz about your beliefs on the welfare state:

  1. True or false: "The current American welfare state is unusually small."
  2. True or false: "The United States has always been a welfare state laggard."
  3. True or false: "The welfare state undermines productivity and economic growth."

My guess is that a lot of US economists would agree with at least two of these statements. Irwin Garfinkel and Timothy Smeeding challenge all three in their essay, "Welfare State Myths and Measurement," which appears in Capitalism and Society (volume 10, issue 1). They write: "Very reasonable changes in measurement reveal that all three beliefs are untrue."

While one can certainly quarrel in various ways with the "reasonable changes" they propose, the mental exercise involved in doing so is a nice chance to take a big-picture look at the US "welfare state" in a variety of contexts. When Garfinkel and Smeeding refer to the "welfare state," they are not talking about the extent of income redistribution. Instead, they are talking more broadly about the extent to which the government takes on the provision of a range of social benefits including retirement income, health care, and education spending.

So what is the argument that the US welfare state is not "unusually small"? Their answer comes in three parts. First, the usual pattern in the world economy is that countries with higher per capita income devote a higher percentage of GDP to social welfare spending. Here's a graph with 162 countries. The high-income countries, like the US, are the solid dots. From this image, their similarities look a lot larger than their differences.

Garfinkel and Smeeding write: "Clearly, the richer the country, the greater the share of their income that citizens devote to welfare state transfers. (Three countries—Hong Kong, Singapore, and the United Arab Emirates—are outliers: very rich with relatively small welfare states. These exceptional nations have not been included in previous research on welfare states in rich nations and we leave it to future scholars to explain their exceptionalism.) The same pattern holds within the United States and Europe. The higher the income of states or countries, the greater the share of income that they devote to welfare state transfers (Chernick, 1998). Most important, in the international context of all nations, the size of the US welfare state does not stick out."

One might object that the figure is visually misleading, because the horizontal and vertical axes are expressed in log form, which can make differences that are large in the real world look smaller on a figure (at least for those not used to reading log graphs). For example, on the vertical axis, 3 is the log level for 20% of GDP going to social welfare spending, while 3.4 is the log level for 30%.
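
For readers who want to check that arithmetic, the axis values quoted above are just natural logs of the percentage figures:

```python
import math

# The axis values quoted above are natural logs of the percentage figures.
for share in (20, 30):
    print(f"ln({share}) = {math.log(share):.2f}")
# ln(20) is about 3.00 and ln(30) about 3.40, so a move of 0.4 up the log
# axis corresponds to a ten-percentage-point jump at that part of the scale.
```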

Garfinkel and Smeeding make two other points about the size of the US welfare state. They point out that while many other countries rely heavily on government funding for retirement and health spending, the US relies more heavily on an employer-provided system. They essentially argue that it doesn't make sense to treat a system where the government taxes and spends as a "welfare state," but a system where the government gives large tax incentives and regulates heavily as "not a welfare state." Here's their argument (citations omitted):

"Should including tax expenditures or employer-provided benefits in the measurement of welfare states be controversial? Again, we argue no. Most economists treat tax expenditures as economically equivalent to explicit budget expenditures and would therefore agree that, at a minimum, the tax-subsidized portion of employer-provided health insurance (between one-fifth and one-quarter of the total) should be included as welfare state expenditures. Although a case can be made for counting only the tax-subsidized portion on the grounds that state funding differs from funding stimulated and regulated by the state, some economists and political scientists—whose practice and rationale we follow—argue for including the entire amount of employer expenditures. These benefits are publicly subsidized and regulated. Moreover, employer-provided health insurance involves socialization of the risk of ill health and redistribution from the healthy to the sick. While this occurs at the firm rather than the national level, failing to include these benefits underestimates the share of the population with insurance and mischaracterizes the US welfare state by obscuring and minimizing how much it spends on subsidized health insurance. Most important, including the full expenditures for employer-provided health insurance comes much closer than including only the tax-subsidized portion to measuring the real social cost of the American system of health insurance, that is to say the peculiar American welfare state version of health insurance—a staggering 17 percent of GDP!"

Finally, Garfinkel and Smeeding point out that saying the US has a small welfare state is based on statements about the size of welfare relative to GDP, but that if one looks just at the absolute size of the US welfare state on a per person basis, it is enormous.

"The United States, as one of the richest nations, could be spending more in absolute terms and less as a percentage of income than other rich nations. … Australia, for example, spent a slightly larger proportion of its GDP on SWE [social welfare expenditures] in 2001 than the US … but its [per capita] GDP then was only a bit above 60% of US GDP. Consequently, US per capita social welfare expenditures are much higher than Australia's. … Real per capita social welfare spending in the United States is larger than that in almost all other countries! Even if employer-provided benefits and tax expenditures are excluded, the United States is still the third biggest spender on a per capita basis."

The second question in the mini-quiz above is whether the US has always been a laggard in welfare state spending. Garfinkel and Smeeding point out that the US was historically a global leader in one kind of social spending: "throughout most of the 19th and 20th centuries the United States was a leader in the provision of mass public education." However, they also point out that in recent decades, other high-income countries have caught up (as I've also noted here, here, and here, for example). Here's their summary (again, citations omitted):

The wide American lead in secondary education persisted past mid-century, at least until 1970, but by century’s end was much reduced. Most of the other countries had caught up fully or nearly, and Canada and Ireland were notably ahead. During the last quarter of the twentieth century, the United States also fell increasingly behind in early childhood education and in higher education. … [T]he United States is now fast becoming a laggard in postsecondary degree completion. For the current 55–64 age cohort, the United States was the leader in post-secondary degrees of all kinds. But subsequent cohorts showed little if any gains in post secondary educational attainment, while several nations not only overtook, but now lead the United States in post-secondary educational attainment and others are rapidly catching up.

The third question in the mini-quiz was about whether the welfare state undermines productivity and growth. Garfinkel and Smeeding point out that in the big picture, all the high-income and high-productivity nations of the world have large welfare states; indeed, one can argue that growth rates for many high-income nations were higher in the mid- and late 20th century, when the welfare state was comparatively larger, than back in the 19th century when the welfare state was smaller. Moreover, improved levels of education and health are widely recognized as important components of improved productivity. As they write: "Furthermore, by reducing economic insecurity, social insurance and safety nets make people more willing to take economic risks."

One can make any number of arguments for improving the design of various aspects of the welfare state, or point to certain countries where aspects of the welfare state became overbearing or dysfunctional. But from a big-picture viewpoint, it's hard to make the case that a large welfare state has been a drag on growth. Garfinkel and Smeeding write:

"Of course, many other factors besides social welfare spending have changed in the past 150 years. But, as we have seen, welfare state spending is now very large relative to the total production of goods and services in all advanced industrialized nations. If such spending had large adverse effects, it is doubtful that growth rates would have been so large in the last 30 years. The crude historical relationship suggests, at a minimum, no great ill effects and, more likely, a positive effect. The burden of proof clearly lies on the side of those who claim that welfare state programs are strangling productivity and growth. If they are right, they need to explain not only why all rich nations have large welfare states, but more importantly why growth rates have grown in most rich nations as their welfare states have grown larger."

Mass Shootings: Trends and Categories

The vile and ghastly news of the mass shooting last week at Umpqua Community College in Roseburg, Oregon, sent me to the July 2015 report "Mass Murder with Firearms: Incidents and Victims, 1999-2013," written by William J. Krouse and Daniel J. Richardson for the Congressional Research Service.

The most-used definition of a "mass shooting" is that four or more people are killed in a single incident. Here's the pattern from 1999-2013.

Mass shootings can be divided into three categories: mass public shootings; "familicide" shootings that involve a group of family members; and felony mass shootings, which would include gang executions, botched robberies and hold-ups, and the like. The chart above includes all three of these. The breakdown across the three categories is below. Mass public shootings account for about half as many incidents as each of the other two categories, but include almost as many victims killed and more victims injured.
What if we just look at the pattern for mass public shootings over time, leaving out the familicides and other felony mass shootings? Here's the pattern. If you squint a bit at 2007, 2009 and 2012, you can sort of imagine an upward trend here, but given that there's a lot of annual fluctuation, it's not clear that the trend is a meaningful one over this time frame.
However, if one takes a longer time-frame going back several decades, it does appear that mass public shootings have risen. Krouse and Richardson write:

With data provided by criminologist Grant Duwe, CRS [the Congressional Research Service] also compiled a 44-year (1970-2013) dataset of firearms-related mass murders that could arguably be characterized as “mass public shootings.” These data show that there were on average:

• one (1.1) incident per year during the 1970s (5.5 victims murdered, 2.0 wounded per incident),
• nearly three (2.7) incidents per year during the 1980s (6.1 victims murdered, 5.3 wounded per incident),
• four (4.0) incidents per year during the 1990s (5.6 victims murdered, 5.5 wounded per incident),
• four (4.1) incidents per year during the 2000s (6.4 victims murdered, 4.0 wounded per incident), and
• four (4.5) incidents per year from 2010 through 2013 (7.4 victims murdered, 6.3
wounded per incident).

These decade-long averages suggest that the prevalence, if not the deadliness, of “mass public shootings” increased in the 1970s and 1980s, and continued to increase, but not as steeply, during the 1990s, 2000s, and first four years of the 2010s.

As another way of illustrating this longer-term upward pattern of mass public shootings, consider only mass public shootings in which 10 or more people were killed. Up through 2013, there had been 13 such episodes in modern US history: seven occurring from 2007-2013, and six during the four decades from 1966 to 2006.

It feels almost mandatory for me to tack on some policy recommendation at the end of this post, but I'll resist. Some of the newly dead in Oregon are not even in their graves yet. There have been 4-5 mass public shootings each year for the last quarter-century. The US population is 320 million. In an empirical social-science sense, it's probably impossible to prove that any particular policy would reduce this total from 4-5 per year back to the 1-3 mass public shootings per year that happened in the 1970s and 1980s. In such a situation, policy proposals (whether the proposal is to react in certain ways or not to react in certain ways) will inevitably be based on a mixture of grief, outrage, preconceived beliefs, and hope, not on evidence.

An Interview with Amy Finkelstein: Health Insurance, Adverse Selection, and More

Douglas Clement has an "Interview with Amy Finkelstein" in the September 2015 issue of The Region, which is published by the Minneapolis Federal Reserve. Finkelstein has done much of her most prominent work on issues of insurance and risk: especially health insurance, but also long-term care insurance, annuities, and others. She's a theory-tester: that is, an empirical researcher who works with a keen awareness of what the previously accepted underlying theories might seem to imply. Back in 2012, Finkelstein was awarded the very prestigious John Bates Clark medal, given annually to an "American economist under the age of forty who is judged to have made the most significant contribution to economic thought and knowledge." In the Fall 2012 issue of the Journal of Economic Perspectives (where I labor in the fields as Managing Editor), Jonathan Levin and James Poterba offered an overview of Finkelstein's earlier career.

For example, standard models of the economics of insurance suggest that people who know they are more likely to receive the insurance payout (more likely to get sick, for example) are more likely to seek out generous insurance policies. Sellers of insurance need to beware this "adverse selection" dynamic, as it is called, or they can end up pricing their insurance as if it were for the average person, and then end up with much higher payouts than expected. But does the evidence support the theory? Finkelstein points out that in a number of studies, those who get the insurance often do not end up receiving greater payouts. A possible reason is that some people are pretty safe risks in part because they are quite risk-averse, so they are more likely to purchase insurance and less likely to use it. Here are some comments from Finkelstein:

Suppose you have people—in health insurance we often refer to them as the “worried well”—who are healthy, so a low-risk type for an insurer, but also risk averse: They’re worried that if something happens, they want coverage. … As a result, people who are low risk, but risk averse, will also demand insurance, just as high-risk people will. And it’s not obvious whether, on net, those with insurance will be higher risk than those without. … We looked at long-term care insurance—which covers nursing homes—and rates of nursing home use. We found that individuals with long-term care insurance were not more likely to go into a nursing home than those without it, as standard adverse selection theory would predict. In fact, they often looked less likely to go into a nursing home. These results held even after controlling for what the insurance company likely knew about the individual, and priced insurance on. … [O]ur data gave us a way to detect private information: people’s self-reported beliefs about their chance of going into a nursing home. And we showed that people who think they have a higher chance of going into a nursing home are both more likely to buy long-term care insurance and more likely to go into a nursing home. … That certainly sounds like the standard adverse selection models! … Then we found some examples in the data that we broadly interpreted as proxies for preferences such as risk aversion, and we found that individuals who report being more likely to, for example, get flu shots, or more likely to wear seatbelts, were both more likely to buy long-term care insurance and less likely to subsequently go into a nursing home.
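
Finkelstein's point about the "worried well" can be illustrated with a toy simulation: if an unobserved taste for caution raises insurance demand but lowers claims, the textbook prediction that the insured are riskier can wash out or even reverse. Here is a sketch on synthetic data (the weights are arbitrary, and this is not her long-term care dataset):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two independent traits per person (synthetic): underlying risk and risk aversion.
risk = rng.normal(0, 1, size=n)
risk_aversion = rng.normal(0, 1, size=n)

# Demand for insurance rises with both risk and risk aversion (weights arbitrary).
buys_insurance = (0.7 * risk + 1.3 * risk_aversion + rng.normal(0, 1, size=n)) > 0

# The insured event (say, nursing home use) rises with risk but falls with
# risk aversion, because more risk-averse people also behave more cautiously.
event = (1.0 * risk - 0.8 * risk_aversion + rng.normal(0, 1, size=n)) > 1.0

print("Event rate among the insured:  ", round(event[buys_insurance].mean(), 3))
print("Event rate among the uninsured:", round(event[~buys_insurance].mean(), 3))
# With these made-up weights the insured look no riskier, and can even look
# safer, than the uninsured -- despite genuine private information about risk.
```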

In another prominent line of work, Finkelstein and several co-authors looked at the question of geographic variation in health care costs–that is, the well-known fact that health care utilization and spending per person are much higher in some urban areas and states than in others. They asked: What happens when a person relocates from a high-utilization, high-cost area to a low-cost, low-utilization area? If one believes that health care decisions are determined by a mixture of patient expectations and what local health care providers think of as "best practice," one might expect the health care usage of those who relocate to trend gradually toward the patterns of their new geographic location. But that's not what happens. Finkelstein explains:

"We … look at people who moved geographically across areas with different patterns of health care utilization (i.e., high-utilization versus low-utilization areas) and whether their health care utilization changed. Originally, we were very focused on this issue of habit formation, which would suggest a very specific conceptual model and econometric specification. … So you would expect, in a model with habit formation, that maybe initially there wouldn’t be much change in your health care utilization. But over time—whether it’s because doctors would be urging you to do less or the people around you were like, “Why go to the doctor when you have a minor pain?”—you would gradually change your behavior toward the new norm. But that’s just not what we see at all. We have about 11 years of data on Medicare beneficiaries and about 500,000 of them who move across geographic areas. When they do, we see a clear, on-impact change: When you move from a high-spending to a low-spending place, or vice versa, you jump about 50 percent of the way to the spending patterns of the new place. But then your behavior doesn’t change any further. … We estimate that about half of the geographic variation in health care utilization reflects something “fixed” about the patient that stays with them when they move, such as their health or their preferences for medical care. And about half of the geographic variation in health care utilization reflects something about the place, such as the beliefs and styles of the doctors there, or the availability of various medical technologies. This gives you a very different perspective on how to think about the geographic variation in health care spending than the prior conventional wisdom that most of the geographic variation in the health care system was due to the supply side—that is, something about the place rather than the patient."
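
The movers result can also be mimicked in a few lines. Suppose, as an assumption mirroring the roughly 50/50 split she describes, that each area's average utilization is half a "place" component and half the average "patient" component of its residents; a mover carries their own patient component with them, so on impact they close only the place part of the gap:

```python
import numpy as np

rng = np.random.default_rng(3)
n_movers = 100_000

# Assume each area's average utilization splits 50/50 into a "place" component
# (doctors' practice styles, available technology) and the average "patient"
# component of its residents -- mirroring the estimate quoted above.
place_origin = rng.normal(0, 1, size=n_movers)
place_destination = rng.normal(0, 1, size=n_movers)
patients_origin = rng.normal(0, 1, size=n_movers)        # area-average patient effects
patients_destination = rng.normal(0, 1, size=n_movers)

# Gap in average utilization between each mover's destination and origin areas...
area_gap = (place_destination + patients_destination) - (place_origin + patients_origin)
# ...but a mover carries their own patient component with them, so the change
# they experience on moving reflects only the place component.
movers_jump = place_destination - place_origin

slope = np.polyfit(area_gap, movers_jump, 1)[0]
print(f"Fraction of the area gap closed on impact: {slope:.2f}")  # roughly 0.5
```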

In the last few years, some of Finkelstein's most prominent research has been an analysis of data generated by an experiment in the state of Oregon. Back in 2008, Oregon wanted to expand Medicaid coverage to low-income people who wouldn't otherwise have been eligible for Medicaid. The state realized that it didn't have enough money to offer the expanded health insurance to everyone, so it held a lottery. From an academic research point of view, this decision was a dream come true, because it becomes possible to compare health and life outcomes for two very similar groups–one randomly chosen to receive additional health insurance and one not. Finkelstein and a team of co-authors were on the job. Finkelstein describes some of their findings:

For health care use, we found across the board that Medicaid increases health care use: Hospitalizations, doctor visits, prescription drugs and emergency room use all increased. On the one hand, this is economics 101. Demand curves slope down: When you make something less expensive, people buy more of it. And what health insurance does, by design, is lower the price of health care for the patient. … On the other hand, there were ways in which these results were surprising. For Medicaid, in particular, there’s been a lot of conjecture that while in general, health insurance would increase use of health care, that because Medicaid reimbursement rates to providers are so low, providers wouldn’t want to treat Medicaid patients. … Our findings reject this view. We find compelling evidence from a randomized evaluation that relative to being uninsured, Medicaid does increase use of health care. Another result that some found surprising was on use of the emergency room. There had been claims in policy circles that covering the uninsured with Medicaid might get them out of the emergency room … The hope that ER use would go down comes from the belief that doctor visits are substitutes for the ER, so when the doctor also becomes free, you go to the doctor instead of the emergency room. Maybe this is the case (or maybe it isn’t), but on net, our results show any substitution for the doctor that may exist is just not outweighed by the direct effect of making the emergency room free. On net, Medicaid increases use of the emergency room, at least in the first one to two years of coverage we are able to look at.
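
Because assignment was by lottery, the core analysis is close to a comparison of means between lottery winners and losers, with winning used as an instrument for actually getting covered. Here is a stripped-down sketch on synthetic data (the actual study adjusts for household size on the lottery list and other details):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 20_000

# Synthetic lottery: winning is random, winners are more likely to end up on
# Medicaid, and coverage raises the chance of an ER visit (all effects made up).
won_lottery = rng.integers(0, 2, size=n)
got_medicaid = (rng.random(n) < 0.10 + 0.25 * won_lottery).astype(int)
er_visit = (rng.random(n) < 0.25 + 0.07 * got_medicaid).astype(int)

df = pd.DataFrame({"won": won_lottery, "medicaid": got_medicaid, "er": er_visit})

# Intent-to-treat effect: compare outcomes by lottery status.
itt = df.groupby("won")["er"].mean().diff().iloc[-1]
# First stage: how much winning the lottery raised Medicaid coverage.
first_stage = df.groupby("won")["medicaid"].mean().diff().iloc[-1]

print(f"ITT effect on ER use:        {itt:.3f}")
print(f"First stage (coverage gain): {first_stage:.3f}")
print(f"Implied effect of coverage:  {itt / first_stage:.3f}")  # Wald/IV ratio
```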

A variety of other findings have emerged from this research, which is ongoing. In the Oregon data, the additional health insurance reduced financial risk for households, and perhaps not coincidentally, also led to improvements in mental health status (measured both by self-reported mental health and by the proportion diagnosed with depression). In terms of measures of physical health, Finkelstein reports, "we did not detect statistically significant effects on the physical health measures we studied: blood sugar, cholesterol and blood pressure."

The expansion of Medicaid in Oregon clearly brought at least some benefits to the previously uninsured. But was the cost to the state worth the benefits to the individuals? Finkelstein and a couple of co-authors tried to model what the insurance was worth to those receiving it. They found:

[O]ur central estimate is that the value of Medicaid to a recipient is about 20 to 40 cents per dollar of government expenditures. … The other key finding is that the nominally “uninsured” are not really completely uninsured. We find that, on average, the uninsured pay only about 20 cents on the dollar for their medical care. This has two important implications. First, it’s a huge force working directly to lower the value of Medicaid to recipients; they already have substantial implicit insurance. … Second and, crucially, the fact that the uninsured have a large amount of implicit insurance is also a force saying that a lot of spending on Medicaid is not going directly to the recipients; it’s going to a set of people who, for want of a better term, we refer to as “external parties.” They’re whoever was paying for that other 80 cents on the dollar.
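
The "20 cents on the dollar" finding implies a simple accounting of where each dollar of care paid for by Medicaid would otherwise have come from. Here is a stylized version, taking the quoted figure at face value and ignoring any change in the quantity of care (both simplifying assumptions):

```python
# Stylized accounting of a dollar of medical care for a newly covered person,
# taking the quoted "20 cents on the dollar" at face value and ignoring any
# change in the quantity of care (both simplifying assumptions).
share_paid_by_uninsured = 0.20

relief_to_recipient = share_paid_by_uninsured             # out-of-pocket costs avoided
relief_to_external_parties = 1 - share_paid_by_uninsured  # costs others no longer absorb

print(f"Direct relief to the recipient: {relief_to_recipient:.2f}")
print(f"Transfer to 'external parties': {relief_to_external_parties:.2f}")
```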

For those who would like some additional doses of Finkelstein, I've posted a couple of times as results from the Oregon study were published, and you can check them out at "Effects of Health Insurance: Randomized Evidence from Oregon" (August 31, 2012) and "Why the Uninsured Don't Have More Emergency Room Visits" (January 6, 2014). Finkelstein has also published several articles in the Journal of Economic Perspectives, including one on "Long-Term Care Insurance in the United States" (November 22, 2011) and another, with Liran Einav, in the Winter 2011 issue: "Selection in Insurance Markets: Theory and Empirics in Pictures."

Causes of Wealth Inequality: Dynastic, Valuation, or Income?

There are at least three reasons why inequality of wealth could remain high or rise over time: 1) dynastic reasons, in which inherited wealth looms larger over time; 2) valuation issues, as when the price of existing assets like stocks or real estate soars for a time; and 3) a surge of inequality at the very top of the income distribution, which generates a corresponding inequality in wealth. Robert Arnott, William Bernstein, and Lillian Wu "agree that inequality of wealth has intensified in the recent past." However, they challenge the importance of the dynastic explanation and emphasize the latter two causes in their essay, "The Myth of Dynastic Wealth: The Rich Get Poorer," which appears in the Fall 2015 issue of the Cato Journal.

A substantial chunk of their essay is a review and critique of the arguments in Thomas Piketty's 2013 book Capital in the Twenty-First Century. I assume that even readers of this blog, who are perhaps more predisposed than normal humans to find such a discussion of interest, have mostly had enough of that. For those who want more, some useful starting points are my posts on "Piketty and Wealth Inequality" (February 23, 2015) and on "Digging into Capital and Labor Income Shares" (March 20, 2015).

Here, I want to focus instead on the empirical evidence Arnott, Bernstein, and Wu present about dynastic wealth in the United States. They focus on evidence from the Forbes 400 list of the wealthiest Americans, which has been published since 1982. They look both at how many famous fortunes of the earlier part of the 20th century survived to be on this list, and at the evolution of who is on this list over time. They write:

Take, as a counterexample, the Vanderbilt family. When the family converged for a reunion at Vanderbilt University in 1973, not one millionaire could be found among the 120 heirs and heiresses in attendance. So much for the descendants of Cornelius Vanderbilt, the richest man in the world less than a century before. … The wealthiest man in the world in 1918 was John David Rockefeller, with an estimated net worth of $1.35 billion. This was a whopping 2 percent of the U.S. GDP of $70 billion at that time, nearly two million times our per capita GDP, at a time when the nation was the most prosperous in the world. An equivalent share of U.S. GDP today would translate into a fortune of over $300 billion. … The Rockefellers … scored 13 seats on the 1982 Forbes debut list, with collective wealth of $7 billion in inflation-adjusted 2014 dollars. As of 2014, only one Rockefeller (David Rockefeller, who turned 100 in June 2015) remains, with a net worth of about $3 billion. If dynastic wealth accumulation were a valid phenomenon, we would expect little change in the composition of the Forbes roster from year to year. Instead, we find huge turnover in the names on the list: only 34 names on the inaugural 1982 list remain on the 2014 list … 
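
As a rough check on how those figures fit together, the sketch below uses the GDP and net-worth numbers from the quotation plus approximate 1918 population and 2014 GDP figures, which are my assumptions rather than numbers from the essay.

```python
# Back-of-the-envelope check of the Rockefeller figures (approximate inputs).
rockefeller_1918 = 1.35e9        # net worth, from the quotation
us_gdp_1918 = 70e9               # from the quotation
us_pop_1918 = 103e6              # roughly 103 million people (my approximation)
us_gdp_2014 = 17.4e12            # roughly $17.4 trillion (my approximation)

share_of_gdp = rockefeller_1918 / us_gdp_1918                            # about 2%
multiple_of_per_capita = rockefeller_1918 / (us_gdp_1918 / us_pop_1918)  # ~2 million
equivalent_today = share_of_gdp * us_gdp_2014                            # $300+ billion

print(f"share of 1918 GDP: {share_of_gdp:.1%}")
print(f"multiple of 1918 per capita GDP: {multiple_of_per_capita:,.0f}")
print(f"equivalent share of 2014 GDP: ${equivalent_today / 1e9:,.0f} billion")
```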

Arnott, Bernstein, and Wu offer a number of ways in which dynastic wealth is eroded from one generation to the next: 1) low returns (including when the rich "fall prey to knaves"); 2) investment expenses paid to "bank trust companies, 'wealth management' experts, estate attorneys, and the like"; 3) income, capital gains, and estate taxes; 4) charitable giving; 5) the division of fortunes among heirs; and 6) spending, as when some heirs do a lot of it. Their overall finding, based on the patterns in their data, is that among the hyper-wealthy, the common pattern is for real net worth to be cut in half every 14 years or so, and to decline by about 70% from one generation to the next.
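
Those two numbers are broadly consistent with each other: a 14-year half-life implies a decline of roughly 70 percent over a generation of about 24 years. The quick check below is mine; the authors' assumed generation length is not stated here.

```python
# If real net worth halves every 14 years, the fraction remaining after g years
# is 0.5 ** (g / 14). A ~70% decline corresponds to a generation of roughly
# 24 years; a 30-year generation would imply closer to a ~77% decline.
half_life = 14
for generation_years in (24, 30):
    remaining = 0.5 ** (generation_years / half_life)
    print(f"{generation_years} years: {1 - remaining:.0%} decline")
```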

If the inequality of wealth is not a dynastic phenomenon, and dynastic wealth in fact tends to fade with time, then why has inequality of wealth remained high in recent decades? Arnott, Bernstein, and Wu suggest two alternatives.

One is the huge run-up in asset values in recent decades, including the stock market. However, the authors make an important and intriguing point about these valuations. From a long-run viewpoint, gains from stock market investment need to be connected to the profits earned by companies. In the last few decades, a major change in the US stock market is that the dividends paid out by firms have dropped. In the past, those who owned stock looked less wealthy at any given moment, but because of owning stock they could often expect to receive a hefty stream of dividend payments in the future. Now, those who own stock look more wealthy right now (after the run-up in stock prices), but they appear likely to receive a lower stream of dividend payments in the future. Thus, more of the future profit performance of a company is showing up in the current price of the stock, and less as a payment of dividends in the future. This is a more complex phenomenon than a simple rise in wealth inequality.
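
One way to see the logic is with the textbook Gordon growth formula, P = D/(r - g), where P is the stock price, D is next year's dividend, r the discount rate, and g the growth rate of dividends. The numbers below are purely illustrative assumptions of mine, not figures from the essay.

```python
# Illustration with the Gordon growth formula P = D / (r - g) (made-up numbers).
# A firm earning $10 per share, valued with an 8% discount rate, is worth the
# same whether it pays everything out now or retains earnings that grow future
# dividends, but the low-payout version has a far higher price relative to the
# dividends its owner actually receives today.
r = 0.08
earnings = 10.0

for payout in (1.0, 0.4):
    dividend = payout * earnings
    g = (1 - payout) * r          # retained earnings reinvested at the 8% rate
    price = dividend / (r - g)
    print(f"payout {payout:.0%}: dividend ${dividend:.2f}, price ${price:.2f}, "
          f"price/dividend {price / dividend:.1f}")
```

In this stylized case the fundamental value of the firm is unchanged, but the owner's measured wealth relative to the dividends actually received today rises sharply when the payout falls; add in the run-up in valuations themselves, and measured wealth can climb even when the underlying claim on future profits has changed much less.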

The other change they point to is the enormous payments now received by corporate executives, often through stock options. The authors are writing in a publication of the libertarian Cato Institute. Thus, it is no surprise when they write: "We have no qualms about paying entrepreneurial rewards (i.e., vast compensation) to executives who create substantial wealth for their shareholders or who facilitate path-breaking innovations and entrepreneurial growth." But then they go on to add:

But an abundance of research shows little correlation between executive compensation and shareholder wealth creation (let alone societal wealth creation). Nine-figure compensation packages are so routine they only draw notice when the recipients simultaneously run their companies into the ground, as was the case with Enron, Global Crossing, Lehman Brothers, Tyco, and myriad others. It’s difficult for an entrepreneur to become a billionaire, in share wealth, while running a failing business. How can even mediocre corporate executives take so much of the pie? Bertrand and Mullainathan (2001) cleverly disentangled skill from luck by examining situations in which earnings changes could be reasonably ascribed to luck (say, a fortuitous change in commodity prices or exchange rates). They found that, on average, CEOs were rewarded just as much for “lucky” earnings as for “skillful” earnings. The authors postulate what they term the “skimming” hypothesis: “When the firm is doing well, shareholders are less likely to notice a large pay package.” A governance linkage is also evident: The smaller the board, the more insiders on it, and the longer tenured the CEO, the more flagrant “pay for luck” becomes, while the presence of large shareholders on the board serves to inhibit skimming. Perhaps shareholders should be more attentive to governance?
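
The Bertrand and Mullainathan design described here can be sketched in two stages: first isolate the part of firm performance attributable to something plainly outside the CEO's control, such as a commodity price, and then ask whether pay responds to that "lucky" component as much as to the rest. The version below is a minimal, hypothetical illustration with simulated data and invented variable names, not their actual specification.

```python
# Minimal sketch of a "pay for luck" test (simulated data, illustrative only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
oil_price_change = rng.normal(size=n)             # "luck": outside CEO control
skill = rng.normal(size=n)                        # everything else
performance = oil_price_change + skill

# Stage 1: split performance into a predicted "lucky" part and a residual part.
stage1 = sm.OLS(performance, sm.add_constant(oil_price_change)).fit()
lucky = stage1.fittedvalues
residual = performance - lucky

# Stage 2: does CEO pay respond to luck as much as to the residual?
ceo_pay = 0.5 * performance + rng.normal(size=n)  # simulated pay-setting rule
X = sm.add_constant(np.column_stack([lucky, residual]))
stage2 = sm.OLS(ceo_pay, X).fit()
print(stage2.params)  # similar coefficients on the "lucky" and residual parts
                      # would be the skimming-style pattern the paper reports
```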