Women, Mathematical Skills, Academia

Focus on the so-called STEM departments in academia: that is, science, technology, engineering, and mathematics. There is a fairly clear pattern that women are less well-represented in the academic departments that rely on higher mathematical skills. The harder question is how to explain this pattern. Stephen J. Ceci, Donna K. Ginther, Shulamit Kahn, and Wendy M. Williams address these issues in their article "Women in Academic Science: A Changing Landscape," which appears in the December 2014 issue of Psychological Science in the Public Interest. Here's a taste of their conclusions:

We conclude by suggesting that although in the past, gender discrimination was an important cause of women’s underrepresentation in scientific academic careers, this claim has continued to be invoked after it has ceased being a valid cause of women’s underrepresentation in math-intensive fields. Consequently, current barriers to women’s full participation in mathematically intensive academic science fields are rooted in pre-college factors and the subsequent likelihood of majoring in these fields, and future research should focus on these barriers rather than misdirecting attention toward historical barriers that no longer account for women’s underrepresentation in academic science.

Here's a figure that illustrates the starting point for Ceci, Ginther, Kahn, and Williams. Each of the points is a STEM department. The authors divide their analysis into what they call the LPS departments, which are the three in the upper left with more females among those who get a PhD and relatively lower math scores on the GRE exam, and what they call the GEEMP departments, which are the departments to the bottom right with a smaller share of females among those who get a PhD in that field and higher average math scores on the GRE exam. These differences in PhDs granted to females are reflected in large gaps in the number of female professors in these fields.


Ginther and Kahn are economists, while Ceci and Williams are psychologists. Thus, the paper combines economics-style analysis of career development patterns with psychology-style analysis of how people learn. For economists, a standard approach is to look at the "pipeline" for producing tenured professors. For example, you can look at how many females major in STEM subjects in college, what proportion go on to a PhD, and what proportion then move on to various academic jobs. The idea is to see where there is "leakage" in the pipeline–and thus identify the barriers to women becoming professors. When they carry out this analysis, the authors offer a (to me) surprising conclusion: the LPS fields have relatively substantial "leakage" when comparing how females and males move from undergrad majors to grad school and professorships. But in the GEEMP areas–which include economics–women and men in recent years proceed from undergrad degrees through grad school and into professorships at similar rates. After reviewing a range of evidence, they write:

Thus, the points of leakage from the STEM pipeline depend on the broad discipline being entered—LPS or GEEMP. By graduation from college, women are overrepresented in LPS majors but far underrepresented in GEEMP fields. In GEEMP fields, by 2011, there was very little difference in women’s and men’s likelihood to advance from a baccalaureate degree to a PhD and then, in turn, to advance to a tenure-track assistant professorship. … [O]nce women are within GEEMP fields, their progress resembles that of male GEEMP majors. In contrast, whereas far more women than men major in LPS fields, in 2011, the gender difference in the probability of advancing from an LPS baccalaureate degree to a PhD was not trivial, and the gap in the probability of advancing from PhD to assistant professorship was particularly large, with fewer women than men advancing.
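Mechanically, this pipeline analysis is just cohort bookkeeping. Here is a minimal sketch, with purely hypothetical counts (not numbers from the paper), of how comparing stage-to-stage conversion rates by gender locates the leakage:

```python
# A minimal sketch of the "pipeline" accounting described above, with
# purely hypothetical cohort counts: compare stage-to-stage conversion
# rates by gender to see where the "leakage" occurs.
pipeline = {
    "women": {"BA majors": 1000, "PhDs": 80, "asst professors": 24},
    "men":   {"BA majors": 1000, "PhDs": 100, "asst professors": 30},
}
stages = ["BA majors", "PhDs", "asst professors"]

for group, counts in pipeline.items():
    rates = {f"{a} -> {b}": counts[b] / counts[a]
             for a, b in zip(stages, stages[1:])}
    print(group, {k: f"{v:.0%}" for k, v in rates.items()})
# With these made-up numbers, the PhD -> professorship rate is identical
# for both groups (30%), so the leakage shows up at the BA -> PhD stage
# (8% for women vs. 10% for men).
```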

The message is that the most substantial barriers to women in economics and other GEEMP fields arise before college. Why might this be so? One set of explanations focuses on the higher scores for boys on a wide range of math tests. The other set of explanations focuses on social expectations about interests and careers. Of course, these explanations become entangled, because acquiring skills is interrelated with social expectations.

With regard to higher math scores for boys, the paper reviews evidence on how in utero exposure to androgen hormones is greater for boys, and how certain math-related abilities (like 3D spatial processing) appear to be greater for boys at young ages. I'll skip past that evidence here because: i) as the authors note, it's far from definitive; and ii) I lack any particular competence to evaluate it anyway. Instead, let me stick to several points that seem well established.

In terms of the basic data from math scores themselves, it used to be true that math test scores for boys were higher than those for girls, but on average, high school girls have now caught up. The authors note: "However, by the beginning of the 21st century, girls had reached parity with boys—including on the hardest problems on the National Assessment of Educational Progress (NAEP) for high school students." It also seems true that at the top of the distribution of math test scores, boys substantially outnumber girls: "Thus, a number of very-large-scale analyses converged on the conclusion that there are sizable sex differences at the right tail of the math distribution." One of many studies they discuss looked at the "Programme for International Student Assessment data set for the 33 countries that provided data in all waves from 2000 to 2009. They, too, found large sex differences at the right tail: 1.7:1 to 1.9:1 favoring males at the top 5% and 2.3:1 to 2.7:1 favoring males at the top 1%."
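How can parity in average scores coexist with 2:1 ratios at the top? As a purely illustrative sketch (not a calculation from the paper), consider two normal score distributions with identical means but slightly different spreads. The 15% difference in standard deviations below is a hypothetical choice, but it is enough to generate tail ratios in the reported range:

```python
# Illustration only: equal means, slightly different spreads.
from scipy.stats import norm
from scipy.optimize import brentq

sd_f, sd_m = 1.0, 1.15   # hypothetical standard deviations; both means are 0

def pooled_upper_tail(z):
    """Share of a pooled 50/50 population scoring above z."""
    return 0.5 * norm.sf(z, scale=sd_f) + 0.5 * norm.sf(z, scale=sd_m)

for top_share in (0.05, 0.01):
    # Find the score cutoff defining the top 5% (then top 1%) of the pool.
    z = brentq(lambda x: pooled_upper_tail(x) - top_share, 0.0, 6.0)
    ratio = norm.sf(z, scale=sd_m) / norm.sf(z, scale=sd_f)
    print(f"top {top_share:.0%}: male:female ratio of about {ratio:.1f}:1")
# Prints ratios of roughly 1.6:1 at the top 5% and 2.4:1 at the top 1%,
# in the ballpark of the PISA figures quoted above.
```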

There is an ongoing nature-vs.-nurture argument about how to interpret these higher math scores at the top. Not only have gender differences in math scores changed over time, but they also "vary by cohort, nation, within-national ethnic groups, and the form of test used. … Moreover, mathematics is heterogeneous, comprising many different cognitive skills …" At a minimum, these patterns suggest that gender gaps in test scores are quite sensitive to environmental factors. For example, in Iceland, Singapore, and Indonesia, more girls than boys scored in the top 1% of math tests at certain ages.

Some of the evidence the authors cite on the importance of social environment in affecting math scores comes from a Spring 2010 symposium in the Journal of Economic Perspectives on "Tests and Gender." (Full disclosure: I've been Managing Editor of JEP since its first issue in 1987. All JEP articles back to the first issue are freely available online at the journal's website.)

For example, in that issue of JEP, Devin G. Pope and Justin R. Sydnor look at "Geographic Variation in the Gender Differences in Test Scores" across U.S. states and regions. Here's an illustrative finding based on scores from 8th graders on the National Assessment of Educational Progress (NAEP). The vertical axis shows that in every region, the female-male ratio in the top 5% of reading scores is greater than 2, almost reaching 3 in the Mountain states. The horizontal axis shows that the male-female ratio in the top 5% of math and science scores is greater than 1 in every region, ranging from 1.3 in the New England states to 2.2 in the Middle Atlantic states. This finding confirms that a difference in math test scores exists at the extreme. It also strongly suggests that such differences are strongly affected by where you live–and thus are strongly linked to social expectations.

In another paper in the 2010 JEP symposium, Glenn Ellison and Ashley Swanson look at "The Gender Gap in Secondary School Mathematics at High Achievement Levels: Evidence from the American Mathematics Competitions." In a striking finding, they note that most U.S. high school girls who score at the top of these math competitions come from a very small pool of about 20 high schools. This finding strongly suggests that many other girls, if they were in a different academic setting, would demonstrate high-end math skills. Ellison and Swanson write:

[W]e examine extreme high-achieving students chosen to represent their countries in international competitions. Here, our most striking finding is that the highest-scoring boys and the highest-scoring girls in the United States appear to be drawn from very different pools. Whereas the boys come from a variety of backgrounds, the top-scoring girls are almost exclusively drawn from a remarkably small set of super-elite schools: as many girls come from the 20 schools that generally do best on these contests as from all other high schools in the United States combined. This suggests that almost all American girls with extreme mathematical ability are not developing their mathematical talents to the degree necessary to reach the extreme top percentiles of these contests.

Finally, there is intriguing evidence that a number of women with equivalent math skills may not perform as well in the context of competitive and high-stakes math testing. In the 2010 JEP symposium, Muriel Niederle and Lise Vesterlund look at a range of evidence on "Explaining the Gender Gap in Math Test Scores: The Role of Competition." I was especially struck by this study:

They examine the performance of women and men in an entry exam to a very selective French business school (HEC) to determine whether the observed gender differences in test scores reflect differential responses to competitive environments rather than differences in skills. The entry exam is very competitive: only about 13 percent of candidates are accepted. Comparing scores from this exam reveals that the performance distribution for males has a higher mean and fatter tails than that for females. This gender gap in performance is then compared both to the outcome of the national high school graduation exam, and for admitted students, to their performance in the first year. While both of these performances are measured in stressful environments, they are much less competitive than the entry exam. The performance of women is found to dominate that of men, both on the high school exam and during the first year at the business school. Of particular interest is that females from the same cohort of candidates performed significantly better than males on the national high school graduation exam two years prior to sitting for the admission exam. Furthermore, among those admitted to the program, they find that within the first year of the M.Sc. program, females outperform males.

A possible reason here is a well-known phenomenon called "stereotype threat"–that is, people who are reminded of a negative stereotype about a group to which they belong before a test often perform worse. Here's one study that Ceci, Ginther, Kahn, and Williams cite along these lines: "For example, female test takers who marked the gender box after completing the SAT Advanced Calculus test scored higher than female peers who checked the gender box before starting the test, and this seemingly inconsequential order effect has been estimated to result in as many as 4,700 extra females being eligible to start college with advanced credit for calculus had they not been asked to think about their gender before completing the test …"

To recap the argument to this point: the basic question is why women are underrepresented in certain math-intensive academic STEM disciplines. For current students, the main underlying reasons seem to trace back to the choices that college students make about undergraduate majors. A possible explanation is that more boys than girls get high scores on pre-college math tests. In turn, a substantial part of this difference seems to trace to social expectations about gender and math, and about gender and test-taking. If more women felt more positive about math before reaching college, then the number majoring in GEEMP areas would presumably tend to rise.

But there is also a different set of arguments about why fewer women sign up for the GEEMP disciplines as undergraduates, which suggests that the whole issue of math test scores may be a distraction. For example, it's not clear how much the gender difference in math scores at the extreme top end should matter for academia. As the authors point out, the typical GRE math scores for those in the math-oriented GEEMP fields were at about the 75th percentile–not the top 1%. Another intriguing fact is that women have been receiving 40-45% of math PhDs for the last few decades. This alternative view focuses less on math skills and more on perceptions about self and occupation. The Ceci, Ginther, Kahn, and Williams team points out (some citations omitted):

Psychologists have charted large sex differences in occupational interests, with women preferring so-called “people-oriented” (or “organic,” or natural science) fields and men preferring “things” (people- and thing-oriented individuals are also termed “empathizers” and “systematizers,” respectively). This people-versus-things construct … is one of the salient dimensions running through vocational interests; it also represents a difference of 1 standard deviation between men and women in vocational interests. Lippa has repeatedly documented very large sex differences in occupational interests, including in transnational surveys, with men more interested in “thing”-oriented activities and occupations, such as engineering and mechanics, and women more interested in people-oriented occupations, such as nursing, counseling, and elementary school teaching. And in a very extensive meta-analysis of over half a million people, Su, Rounds, and Armstrong (2009) reported a sex difference on this dimension of a full standard deviation.

In other words, the reason that fewer women choose the GEEMP disciplines as undergraduates–and thus the reason that women are underrepresented as faculty in those areas–may be less related to math skills and more related to this distinction between people-oriented and thing-oriented interests.
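For a sense of what "a difference of a full standard deviation" means in practice, here is a small illustrative calculation using two stylized unit-variance normal distributions one standard deviation apart. This is a textbook effect-size exercise, not a computation from the Su, Rounds, and Armstrong data:

```python
# What a 1-standard-deviation group difference implies, under a stylized
# model of two unit-variance normal distributions one SD apart.
from math import sqrt
from scipy.stats import norm

d = 1.0  # Cohen's d: standardized mean difference on the people-things dimension

# Probability that a random draw from the higher-mean group exceeds
# a random draw from the lower-mean group:
superiority = norm.cdf(d / sqrt(2))   # about 0.76

# Share of the higher-mean group scoring above the other group's median:
above_median = norm.cdf(d)            # about 0.84

print(f"P(random higher-group draw > random lower-group draw): {superiority:.2f}")
print(f"share of higher-mean group above the other group's median: {above_median:.2f}")
# Large by psychology standards, though the distributions still overlap a lot.
```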

In the context of economics, it seems to me true, and also deeply frustrating, that this distinction does capture something about how the field is perceived. Economics is the stuff of life: full of choices that people make about work, consumption, saving, parenthood, and crime, as well as about the structure and decisions of organizations like firms and government that affect people's daily lives in profound ways. But the perception that many students have of economics, which is sometimes unfortunately confirmed by how the subject is taught, can lose track of the people, instead viewing the economy as a thing.

How Did Germany Limit Unemployment in the Recession?

Here's a puzzle: During the Great Recession, the total contraction in economic output was noticeably larger in Germany than in the United States, but the rise in the unemployment rate was noticeably higher in the United States than in Germany. How did Germany manage it? Shigeru Fujita and Hermann Gartner offer "A Closer Look at the German Labor Market 'Miracle'" in the most recent issue of the Business Review published by the Federal Reserve Bank of Philadelphia (Q4 2014, pp. 16-24).

Let's start by stating the puzzle clearly. The top figure shows the change in unemployment rates for the U.S. and Germany during the recession. The bottom figure shows the fall in real output in each economy.

The authors consider two main alternative explanations for this puzzle, and at least from a U.S. perspective, they come from different ends of the political spectrum. One possible set of explanations is that German unemployment stayed relatively low because of government programs, like the short-time work program that helps firms adjust to shorter hours without firing employees. The other possible set of explanations is that German unemployment stayed relatively low because of earlier labor market reforms that reduced unemployment benefits and kept wages and benefits lower and more flexible, which in turn encouraged a growth of jobs. Fujita and Gartner argue that the second set of explanations is more plausible.

Germany does have several government programs that encourage firms to reduce hours when business slows down, rather than firing employees. But Fujita and Gartner argue that these programs have existed in past recessions, and they didn't seem to have any particularly large effect in the most recent recession. They write:

One is the short-time work program. When employees’ hours are reduced, the participating firm pays wages only for those reduced hours, while the government pays the workers a “short-time allowance” that offsets 60 percent to 67 percent of the forgone earnings. Moreover, the firm’s social insurance contributions on behalf of employees in the program are lowered. In general, a firm can use this program for at most six months. At the beginning of 2009, though, when the slowdown of the economy became apparent, the German government encouraged the use of the program by expanding the maximum eligibility period first to 18 months and then to 24 months and by further reducing the social security contribution rate. The usual eligibility requirements were also relaxed.

An important thing to remember here is that these special rules had also been applied in past recessions and thus were not so special after all. True, the share of workers in the program increased sharply in 2009, and thus it certainly helped reduce the impact of the Great Recession on German employment. But a more important observation is that even at its peak during the Great Recession, participation in the program was not extraordinary compared with the levels observed in past recessions. Moreover, in previous recessions, the German labor market had responded in a similar manner to the U.S. labor market. 

Another German program that some have credited with staving off high unemployment is the working-time account, which allows employers to increase working hours beyond the standard workweek without immediately paying overtime. Instead, those excess hours are recorded in the working-time account as a surplus. When employers face the need to cut employees’ hours in the future, they can do so without reducing workers’ take-home pay by tapping the surplus account. German firms overall came into the recession with surpluses in these accounts. Thus, qualitatively speaking, this program certainly reduced the need for layoffs. However, less than half of German workers had such an account, and most working-time accounts need to be paid out within a relatively short period — usually within a year or less. According to Michael Burda and Jennifer Hunt, the working-time account program reduced hours per worker by 0.5 percent in 2008-09, accounting for 17 percent of the total decline in hours per worker in that period.
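To make the short-time allowance arithmetic concrete, here is a minimal sketch. The 60 percent replacement rate comes from the passage quoted above; the wage, hours, and 20 percent hours cut are hypothetical numbers chosen for illustration:

```python
# Hypothetical illustration of the short-time work arithmetic: the firm
# pays only for hours actually worked, and the government replaces 60%
# of the earnings forgone because of the hours cut.
full_hours = 40.0
hourly_wage = 20.0        # euros per hour (hypothetical)
hours_cut = 0.20          # firm cuts hours by 20% (hypothetical)
allowance_rate = 0.60     # program offsets 60-67% of forgone earnings

worked_hours = full_hours * (1 - hours_cut)
firm_pay = worked_hours * hourly_wage              # wages for reduced hours
forgone = full_hours * hours_cut * hourly_wage     # earnings lost to the cut
allowance = allowance_rate * forgone               # government allowance

normal_pay = full_hours * hourly_wage
take_home = firm_pay + allowance
print(f"normal weekly pay: {normal_pay:.0f}, short-time pay: {take_home:.0f}")
# A 20% cut in hours becomes only an 8% cut in take-home pay, since the
# worker absorbs hours_cut * (1 - allowance_rate) of the loss -- which is
# why the program damps the incentive to lay workers off.
```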

To understand the allure of the alternative explanation, consider this graph showing the German employment rate in recent decades. Notice that after around 2003, German employment starts steadily rising, and that trend shows only a hiccup during the Great Recession.
What caused German employment to start rising around 2003? 

We argue that the underlying upward trend was made possible by labor market policies called the Hartz reforms, implemented in 2003-05. … The Hartz reforms are regarded as one of the most important social reforms in modern Germany. The most important change was in the unemployment benefit system. Before the reforms, when workers became jobless, they were eligible to receive benefits equal to 60 percent to 67 percent of their previous wages for 12 to 32 months, depending on their age. When these benefits ended, unemployed workers were eligible to receive 53 percent to 57 percent of their previous wages for an unlimited period. Starting in 2005, the entitlement period was reduced to 12 months (or 18 months for those over age 54), after which recipients could receive only subsistence payments that depended on their other assets or income sources. Moreover, unemployed workers who refused reasonable job offers faced greater and more frequent sanctions such as cuts in benefits. To further lower labor costs and spur job creation, the size of firms whose employees are covered by unemployment insurance was raised from five to 10 workers. Also, regulation of temporary contract workers was relaxed. Furthermore, starting in 2004, the German Federal Employment Agency and the local employment agencies were reorganized with a stronger focus on returning the unemployed to work and by, for example, outsourcing job placement services to the private sector.
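As a rough sketch of how sharply this changed incentives, compare cumulative benefits for a hypothetical unemployed worker under the pre- and post-reform schedules. The replacement rates and durations follow the quoted description, but the monthly wage, the 24-month horizon, the specific rates chosen within the quoted ranges, and the flat subsistence payment are all assumptions for illustration:

```python
# Toy comparison of pre- and post-Hartz unemployment benefit schedules.
# Rates and durations follow the quoted description; the specific choices
# within each quoted range, and all other numbers, are hypothetical.
prev_wage = 2000.0   # previous monthly net wage, euros (hypothetical)
horizon = 24         # months of unemployment to compare (hypothetical)

def pre_reform(month):
    # 60-67% of previous wage for 12-32 months (use 65% for 18 months),
    # then 53-57% for an unlimited period (use 55%).
    return 0.65 * prev_wage if month <= 18 else 0.55 * prev_wage

def post_reform(month):
    # 12 months of regular benefits, then means-tested subsistence
    # payments (a flat 400 euros here as a placeholder).
    return 0.65 * prev_wage if month <= 12 else 400.0

pre = sum(pre_reform(m) for m in range(1, horizon + 1))
post = sum(post_reform(m) for m in range(1, horizon + 1))
print(f"cumulative 24-month benefits: pre-reform {pre:,.0f}, post-reform {post:,.0f}")
# Under these assumptions the post-reform schedule pays roughly a third
# less over two years, sharpening the incentive to return to work sooner.
```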

An earlier post from February 14, 2014, "A German Employment Miracle Narrative," argues that the flexibility of German wages and labor market institutions starting in the mid-1990s started the rise in German employment. In this story, the Hartz reforms take on less importance, but the emphasis of the story is still on greater flexibility in markets, not government programs for sharing hours. Fujita and Gartner make a similar point: "In other words, in the boom leading up to the Great Recession, wage growth was much more muted than during previous booms, and thus this wage moderation was an important factor in creating the upward trend in employment."
A final point from Fujita and Gartner is that the comparison between the U.S. and Germany isn't apples-to-apples, because the underlying causes of the recessions were different. Germany didn't have a housing bubble; instead, it had an export bust. The incentives for laying off workers may be rather different in these two kinds of recessions. They write:

The recession in Germany was brought about by a different shock than that which triggered the recession in the U.S. The U.S. economy suffered a decline in domestic demand as the plunge in home values reduced households’ net wealth, whereas Germany had experienced no housing bubble. Instead, the decline in German output was driven by a short-term plunge in world trade. Whether a recession is expected to be short or long-lasting is an important factor in firms’ hiring and firing decisions. If a firm expects a downturn to last only a short period, it may well choose not to cut its work force, even though it faces lower demand, especially if laying off and hiring workers is costly, as it is in Germany. Consistent with this possibility, Burda and Hunt point out anecdotal evidence that, especially by 2009, German firms were reluctant to lay off their workers because of the difficulty in finding suitable replacements.

Of course, the argument that German unemployment didn't rise as much because of reductions in unemployment benefits, low wage growth, and flexible labor markets doesn't prove that German innovations like the short-time allowance or working-time accounts are a bad idea. They may still be moderately helpful. But it doesn't look like they are the main explanation for Germany's success in limiting the rise in unemployment during and after the recession.

A Global Health Care Spending Slowdown: Temporary or Permanent?

The growth in health care spending has been putting pressure on government budgets all over the world, and in the U.S., it puts pressure on household budgets, too. However, the rise in U.S. health care costs has slowed in the last few years, which has led to a dispute. Is the slowdown in health care spending mainly a reflection of the Great Recession and the sluggish economic growth that has followed? Or does it represent the start of a potentially long-run slowdown in rising health care costs? Partisans of the Obama administration, such as the White House Council of Economic Advisers, like to argue that the Patient Protection and Affordable Care Act of 2010 may be an important part of slowing down health care costs, too.

As I've argued in the past (here and here), U.S. health care spending seemed to slow down in the mid-2000s, well before any cost-constraining measures of the 2010 legislation could take effect. In addition, the slowdown in health care costs has been international, which suggests that changes in U.S. law are not the driving factor. In the December 2014 issue of Finance & Development, Benedict Clements, Sanjeev Gupta, and Baoping Shang offer more perspective on the international dimensions of health care costs in high-income countries in their article, "Bill of Health."

Here is their figure showing the pattern of health care spending across OECD countries in the last few decades. Notice that there are several times when it looks as if health care costs are slowing for a few years, before they start rising again. In discussing the recent slowdown in the rise in health care costs, they note: "The slowdown in growth for all types of spending in nearly all advanced economies—and at about the same time—suggests that it was driven by a common factor. The common element appears to be the global financial crisis, which affected economic activity and governments’ capacity to finance continued health care spending growth."

Moreover, the authors point out that the countries where the global recession hit hardest, circled in red, have had the biggest slowdown in health care costs, while the countries where the recession was mildest, circled in green, have had less of a slowdown in health care costs.

Of course, it's theoretically possible that health care spending in almost every other country was slowing down because of the recession and short-run cuts in government health care spending, while health care spending in the U.S. was slowing down for long-run reasons driven by the 2010 legislation. But it's pretty unlikely. Indeed, when the authors project how much public health spending will rise as a percent of GDP in the next 15 years, they project that the rise will be largest of all in the U.S.–thus squeezing government budgets even further. Their projections separate out the aging of the population, in blue, from the rest of the increase in health care spending, in green.

Automation and Job Loss: The Fears of 1964

A half-century ago, there was deep and widespread concern that automation and new technology were leading to chronically high levels of unemployment. In retrospect, we know the fear at that time was unfounded. But it is nonetheless fruitful to review the controversy.

To set the stage, the U.S. economy suffered 10 months of recession from April 1960 to February 1961. The unemployment rate rose from 5.0% in June 1959 to 7.1% by May 1961. A widespread fear was that the job losses were due to the arrival of automation and electronic technology. For example, here are some excerpts from a TIME magazine article on February 24, 1961, "The Automation Jobless."

The rise in unemployment has raised some new alarms around an old scare word: automation. How much has the rapid spread of technological change contributed to the current high of 5,400,000 out of work? … While no one has yet sorted out the jobs lost because of the overall drop in business from those lost through automation and other technological changes, many a labor expert tends to put much of the blame on automation. … Dr. Russell Ackoff, a Case Institute expert on business problems, feels that automation is reaching into so many fields so fast that it has become "the nation's second most important problem." (First: peace.)

The number of jobs lost to more efficient machines is only part of the problem. What worries many job experts more is that automation may prevent the economy from creating enough new jobs. … Throughout industry, the trend has been to bigger production with a smaller work force. … Many of the losses in factory jobs have been countered by an increase in the service industries or in office jobs. But automation is beginning to move in and eliminate office jobs too. … In the past, new industries hired far more people than those they put out of business. But this is not true of many of today's new industries. … Today's new industries have comparatively few jobs for the unskilled or semiskilled, just the class of workers whose jobs are being eliminated by automation.

Thus, President John F. Kennedy–who probably edged out Richard Nixon in the 1960 presidential race in substantial part due to the seemingly dicey state of the economy at the time–delivered a speech to a joint session of Congress on May 25, 1961. The speech has become best known for Kennedy's call to put a man on the moon. But that was part IX of the speech. Much earlier, in section II, Kennedy stated:

I am therefore transmitting to the Congress a new Manpower and Training Development program to train or retrain several hundred thousand workers particularly in those areas where we have seen chronic unemployment as a result of technological factors and new occupational skills over a four-year period, in order to replace those skills made obsolete by automation and industrial change with the new skills which the new processes demand. 

The U.S. unemployment rate had declined back to the range of 5.0% by August 1964, but concerns over how the U.S. economy might adapt to technology and automation remained serious enough that President Lyndon Johnson signed legislation creating a National Commission on Technology, Automation, and Economic Progress. The Commission eventually released its report in February 1966, when the unemployment rate had fallen to 3.8%.

Before reviewing the tone and findings of the Commission, I'll just note that when I run into people who are concerned that technology is about to decimate U.S. jobs, I sometimes bring up the 1964 Commission and its report. The usual response is to dismiss the 1964 experience very quickly, on the grounds that the current combination of information and communications technology, along with advances in robotics, represents a totally different situation than in 1964. It's of course true that modern technologies differ from those of a half-century ago, but that isn't the issue. The issue is how an economy and a workforce make a transition when new technologies arrive. It is a fact that technological shocks have been happening for decades, and that the U.S. economy has been adapting to them. The adaptations have not involved a steadily rising upward trend of unemployment over the decades, but they have involved the dislocations of industries falling and rising in different locations, and a continual pressure for workers to have higher skill levels.

It is of course theoretically possible that the technological changes of our own time will be profoundly different from anything that has come before. There is no way of proving that something in the future either will or will not be completely different from what has come before, but I am highly wary of such claims. After all, history also reminds us that claims about how the present moment is utterly unique can sound plausible at the time, but look less plausible even just a few years or a decade later. What strikes me in looking back at the 1966 report is how much the description of the problem sounds quite modern, but how fairly extreme the policy recommendations sound by contemporary standards.

For a sample, here's an overall perspective on technology and jobs from Chapter Two of the 1966 Commission report:

We believe that the general level of unemployment must be distinguished from the displacement of particular workers at particular times and places, if the relation between technological change and unemployment is to be clearly understood. The persistence of a high general level of unemployment in the years following the Korean war was not the result of accelerated technological progress. Its cause was interaction between rising productivity, labor force growth, and an inadequate response of aggregate demand. This is firmly supported by the response of the economy to the expansionary fiscal policy of the last 5 years. Technological change, on the other hand, has been a major factor in the displacement and temporary unemployment of particular workers. Thus technological change (along with other forms of economic change) is an important determinant of the precise places, industries, and people affected by unemployment. But the general level of demand for goods and services is by far the most important factor determining how many are affected, how long they stay unemployed, and how hard it is for new entrants to the labor market to find jobs. The basic fact is that technology eliminates jobs, not work. It is the continuous obligation of economic policy to match increases in productive potential with increases in purchasing power and demand. Otherwise the potential created by technical progress runs to waste in idle capacity, unemployment, and deprivation.

My guess is that a lot of contemporary economists could still sign on to most of this sentiment, a half-century later, although there would be squabbling on a few points. For example, economic discussions in the early 1960s put a heavy emphasis on Keynesian-style stimulation of aggregate demand, and at least some modern economists would put more emphasis on supply-side growth and adjustment problems. The focus here is primarily on job loss and unemployment, while a modern economist would be likely to focus at least as much on issues about rising inequality. And of course, the claim that "the basic fact is that technology eliminates jobs, not work" proved true for the 1960s, but there is controversy over whether it will continue to be true.

The 1966 Commission report offers a long list of recommendations, and I found it interesting to consider how many of the topics are still very much with us 50 years later. It's worth remembering that this is a Commission appointed by a Democratic President at the heart of what came to be called Johnson's "Great Society" wave of legislation. That said, here's a sampling of the recommendations:

"We recommend a program of public service employment, providing, in effect, that the Government be an employer of last resort, providing work for 'hard-core unemployed' in useful community enterprises."

"We recommend that economic security be guaranteed by a floor under family income. That floor should include both improvements in wage-related benefits and a broader system of income maintenance for those families unable to provide for themselves."

"We recommend compensatory education for those from disadvantaged environments, improvements in the general quality of education, universal high school education and opportunity for 14 years of free public education, elimination of financial obstacles to higher education, lifetime opportunities for education, training, and retraining …"

"We recommend the creation of a national computerized job-man matching system which would provide more adequate information on employment opportunities and available workers on a local, regional, and national scale. In addition to speeding job search, such a service would provide better information for vocational choice …"

"We recommend that present experimentation with relocation assistance to workers and their families stranded in declining areas be developed into a permanent program."

"We recommend … regional technical institutes to serve as centers for disseminating scientific and technical knowledge relevant to the region's development …"

There's more, including discussion of how to encourage the use of technology to address health and environmental concerns, to improve workplace conditions, and to make government work better. Much of this list is more about overall goals ("improvements in the general quality of education") than about details of how public policy might address those goals. But viewed as a list of areas for concern, this list of priorities for helping a modern workforce adjust over time to changes in technology seems quite relevant today, a half-century later. The fact that this list still seems so relevant is in part, no doubt, because the underlying issues are hard ones. But it also seems a depressing commentary on some central inadequacies of public policy in the last half-century, and a grim commentary on the irrelevance of much of what passed for public debate in the 2014 election season.

College Studying: 16 Hours Per Week?

College freshmen typically study about 15 hours per week, according to the just-released National Survey of Student Engagement Annual Report 2014. The report is a wealth of information about how different kinds of learning occur in colleges and universities, how faculty spend their time, and how students spend their time. But here's the figure that jumped out at me.

The focus of the figure is that freshmen who outperform their own expectations for grades study about 17 hours per week, while those who underperform their expectations average about 14 hours of studying per week. For the average student, the issue isn't that they are working lots of hours at a paid job, although the underperforming students seem to work just a little more on average.

The survey has been carried out for some years by researchers at the Indiana University Center for Postsecondary Research, and is funded by the Carnegie Foundation for the Advancement of Teaching. Survey results are based on "355,000 census-administered or randomly sampled first-year and senior students attending 622 U.S. bachelor's degree-granting institutions that participated in NSSE in spring 2014."

At the risk of sounding all grinchy on this topic, 14 or 17 hours per week of studying doesn't seem to me nearly enough. Over a seven-day week, this is an average of 2 to 2 1/2 hours per day of studying. And remember that these data are based on surveying students, so my guess is that if there's any bias in this estimate, it's more likely to involve students overestimating their actual study time rather than underestimating it.

My usual advice to undergraduate students is that I think of the academic side of college as a time commitment equal to a full-time job or a bit more–say, a time commitment of 40-50 hours per week, including class time. So if you are actually in class for maybe 12-15 hours per week, then you should expect to put in perhaps another 30 hours of study time each week across all your classes, on average. The good news here, I guess, is that if you are a student who is willing to work the extra hours, you are likely to stand out from the many fellow students who aren't willing to make the commitment. The bad news is that a lot of colleges and universities don't seem to be asking much of their students, and their students are living down to those expectations.

Lower Working Age Population and Secular Stagnation

The "working-age population" is often defined as those from ages 15-64. For many high-income and emerging market economies, growth in the size of the working-age population has been dropping. Indeed, in Japan the working-age population started contracting back in the mid-1990s; the working-age population in the European Union (excluding the UK) started contracting around 2010; and the working-age population in China (after several decades of a one-child policy) is projected to start contracting in the next few years. For most high-income countries, the working-age share of the population is in decline.

A decline in the size or population share of the working-age population is a concern for several reasons. In the last few decades, the commonly expressed concern was that it was going to be difficult to finance retirement and health benefits for the growing elderly population. More recently, the concern has been that slower growth in the working-age population might also slow down economic growth. This argument was central to the case put forward by Alvin Hansen in the 1938 speech where he raised the question of whether the U.S. economy had entered a phase of "secular stagnation"–that is, a permanent slowdown in economic growth.

For example, Hansen said: "[F]or our purpose we may say that the constituent elements of economic progress are (a) inventions, (b) the discovery and development of new territory and new resources, and (c) the growth of population. Each of these in turn, severally and in combination, has opened investment outlets and caused a rapid growth of capital formation." Hansen then noted that population growth had slowed down and that U.S. territory was no longer expanding. Thus, he argued: "We are thus rapidly entering a world in which we must fall back upon a more rapid advance of technology than in the past if we are to find private investment opportunities adequate to maintain full employment. … It is my growing conviction that the combined effect of the decline in population growth, together with the failure of any really important innovations of a magnitude sufficient to absorb large capital outlays, weighs very heavily as an explanation for the failure of the recent recovery to reach full employment."

For a sense of the slowdown in the growth rate of the working-age population since 1970 and in the next few decades, here's a figure from the "Free Exchange" column in the November 22 issue of the Economist. The U.S. economy, with a relatively high birthrate and relatively high levels of immigration, is projected to have slower growth of its working-age population–but not an actual decline.

Here's a figure from a November 14 post by "the data team" at the Economist blog, showing the share of the population of working age. Notice that for Germany and Japan, the share of the population in the 15-64 age bracket peaked more than two decades ago. For the U.S., the working-age share peaked only a couple of years ago. All of the high-income countries shown here are projected to have a sharp decline in the working-age share of the population in the next few decades, although the share in the U.S. is projected to remain the largest.

Why might a lower working-age population lead to a slower rate of economic growth? One reason is just mechanical: other things at least roughly equal, a 1% rise in the number of workers will add about 1% to GDP. But this factor only means that we should focus on the growth of per capita or per worker GDP, thus adjusting for the slower growth rate.
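Here's a minimal sketch of that mechanical point, using hypothetical growth rates: GDP growth is approximately the growth of the workforce plus the growth of output per worker, so slower workforce growth lowers GDP growth one-for-one while leaving per-worker growth untouched.

```python
# Toy growth decomposition: GDP growth ~ workforce growth + growth in
# output per worker (a log-linear approximation). All rates hypothetical.
productivity_growth = 0.015   # output per worker grows 1.5% per year (assumed)

for workforce_growth in (0.01, 0.0, -0.005):
    gdp_growth = workforce_growth + productivity_growth
    print(f"workers {workforce_growth:+.1%}/yr -> "
          f"GDP {gdp_growth:+.1%}/yr, per worker {productivity_growth:+.1%}/yr")
```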

Two other concerns are potentially more serious. First, when the working-age population is growing, firms have an ongoing necessity to expand their investment spending, just to keep up with the number of workers. Conversely, slower growth of the working-age population reduces incentives to invest. Second, if the slower-growing or relatively smaller working age population finds that it must bear a substantially higher tax burden to support the growing proportion of elderly, the disincentives to work could become a factor in slowing growth.

What are the broad policy implications of a slow-growing or relatively smaller working-age population? U.S. investment levels have indeed been lower than expected in recent years, given that the Great Recession officially ended back in mid-2009. Following the lines of Alvin Hansen's 1938 discussion, one can imagine three possibilities for minimizing the risk of secular stagnation.

First, one might try to avoid the population decline. Government support for family-friendly policies hasn't had much substantial effect on falling birthrates across high-income countries. But there are other possibilities. The United States has a relatively open door to legal immigration, not to mention illegal immigration, which increases the working-age population. Also, one can imagine expanding the definition of "working age" so that it includes workers in the 65-75 age range. A number of steps could be taken to encourage a larger share of these workers to remain in the workforce, at least part-time.

Second, Hansen emphasized "the discovery and development of new territory and new resources." Substantial discoveries of new territory seem implausible, but the possibilities of expanding trade by active participation in the globalizing economy remain viable. Also, the U.S. economy has the capacity for a considerable expansion of its energy resources. As I have argued in an earlier post, I personally favor what I like to call "The Drill-Baby Carbon Tax: A Grand Compromise on Energy Policy." Such a policy would move ahead with all deliberate speed both on developing U.S. energy resources and also on finding ways to reduce carbon and other emissions.

Finally, Hansen mentioned the potential for new technology to create opportunities for investment in physical capital, so that new technology can boost productivity while the surge in investment spending also gives a boost to macroeconomic demand. Potential policies here include encouraging a surge in public and private infrastructure, as well as aiming to double or triple the level of national research and development spending.

When the working-age population is growing, it stirs up the economy in a way that is often conducive to growth. With growth of the working-age population slowing, alternative ways of stirring up the economy and encouraging growth become even more important.

An Economist Chews Over Thanksgiving

As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that. (Note: This is an updated and amended version of a post that was first published on Thanksgiving Day 2011.)

The last time the U.S. Department of Agriculture did a detailed "Overview of the U.S. Turkey Industry" appears to be back in 2007, although an update was published in April of this year. Some themes about the turkey market waddle out from that report on both the demand and supply sides.
On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the Eatturkey.com website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.

On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing – from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:

"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 233 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.

Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.

Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.

Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."

The 2014 report points out that the capacity of eggs per hatchery has continued to rise (again, references to charts omitted):

For several decades, the number of turkey hatcheries has declined steadily. During the last six years, however, this decrease began to slow down. As of 2013, there are 54 turkey hatcheries in the United States, down from 58 in 2008, but up from the historical low of 49 reached in 2012. The total capacity of these facilities remained steady during this period at approximately 39.4 million eggs. The average capacity per hatchery reached a record high in 2012. During 2013, average capacity per hatchery was 730 thousand (data records are available from 1965 to present).
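The capacity-per-hatchery figures in these two quotes are just total incubator capacity divided by the number of hatcheries; here's a quick check of the arithmetic:

```python
# Average egg capacity per hatchery = total incubator capacity / hatcheries.
# Figures are the ones quoted from the USDA reports above.
data = {
    1975: (180, 41.9e6),   # (number of hatcheries, total egg capacity)
    2007: (55, 38.7e6),
    2013: (54, 39.4e6),
}
for year, (hatcheries, capacity) in sorted(data.items()):
    print(f"{year}: {capacity / hatcheries / 1e3:,.0f} thousand eggs per hatchery")
# 1975: 233; 2007: 704; 2013: 730 -- matching the intensification the reports describe.
```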

U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time from the 2007 report.

The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of the top turkey producers in 2012 from the National Turkey Federation:

For some reason, this entire post is reminding me of the old line that if you want free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?

Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:

The cost of buying the Classic Thanksgiving Dinner rose by a bit less than 1% in 2014, compared with 2013. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, especially since 1990 or so, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate.
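The underlying index arithmetic is simple enough to sketch in a few lines: price a fixed basket each year, then deflate by the CPI to get the inflation-adjusted line. The items, prices, and CPI levels below are hypothetical stand-ins, not the Farm Bureau's actual figures:

```python
# Price a fixed Thanksgiving basket each year, then deflate by the CPI.
# All items, quantities, prices, and CPI levels here are hypothetical.
basket = {"16-lb turkey": 1, "stuffing (14 oz)": 1, "pumpkin pie mix": 2}

prices = {
    2013: {"16-lb turkey": 21.76, "stuffing (14 oz)": 2.67, "pumpkin pie mix": 3.10},
    2014: {"16-lb turkey": 21.65, "stuffing (14 oz)": 2.75, "pumpkin pie mix": 3.12},
}
cpi = {2013: 233.0, 2014: 236.7}
base_year = 2013

def basket_cost(year):
    """Nominal cost of the fixed basket at a given year's prices."""
    return sum(qty * prices[year][item] for item, qty in basket.items())

for year in sorted(prices):
    nominal = basket_cost(year)
    real = nominal * cpi[base_year] / cpi[year]   # deflate to base-year dollars
    print(f"{year}: nominal ${nominal:.2f}, real (2013 dollars) ${real:.2f}")
```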

Thanksgiving is a distinctively American holiday, and it's my favorite. Good food, good company, no presents–and all these good topics for conversation. What's not to like?

The Origins of the Value of a Statistical Life Concept

Like most economists, I sometimes find myself defending the crowd-pleasing position that it's possible for public policy purposes–and even unavoidably necessary–to put a monetary value on human life. For example, the current value of a “statistical life” used by the U.S. Department of Transportation is $9.1 million. If someone wants to try to understand this kind of calculation, rather than just railing at economists (and me in particular), a useful starting point is to consider the origins of this concept. H. Spencer Banzhaf explains in “Retrospectives: The Cold-War Origins of the Value of Statistical Life,” in the Fall 2014 issue of the Journal of Economic Perspectives (28:4, pp. 213-26). (As with all articles in JEP back to the first issue in 1987, the article is freely available online compliments of the American Economic Association. Full disclosure: I've been Managing Editor of JEP since the first issue.)

Banzhaf begins his story just after RAND Corporation had become an independent organization in 1948. Banzhaf explains what happened in one of its first big contracts (citations omitted):

The US Air Force asked RAND to apply systems analysis to design a first strike on the Soviets. ….  Paxson and RAND were initially proud of their optimization model and the computing power that they brought to bear on the problem, which crunched the numbers for over 400,000 configurations of bombs and bombers using hundreds of equations. The massive computations for each configuration involved simulated games at each enemy encounter, each of which had first been modeled in RAND’s new aerial combat research room. They also involved numerous variables for fighters, logistics, procurement, land bases, and so on. Completed in 1950, the study recommended that the United States fill the skies with numerous inexpensive and vulnerable propeller planes, many of them decoys carrying no nuclear weapons, to overwhelm the Soviet air defenses. Though losses would be high, the bombing objectives would be met. While RAND was initially proud of this work, pride and a haughty spirit often go before a fall. RAND’s patrons in the US Air Force, some of whom were always skeptical of the idea that pencil-necked academics could contribute to military strategy, were apoplectic. RAND had chosen a strategy that would result in high casualties, in part because the objective function had given zero weight to the lives of airplane crews.

RAND quickly backpedalled on the study and instead moved to a more cautious approach that spelled out a range of choices: for example, some choices might cost more in money but be expected to have fewer deaths, while other choices might cost less in money but be expected to have more deaths. The idea was that the think tank identified the range of choices, and the generals would choose among them. But of course, financial resources were limited by political considerations, and so the choices made by the military would typically involve some number of deaths higher than the theoretical minimum that would have been possible if more money had been available. In that sense, spelling out a range of tradeoffs also spelled out the monetary value that would be put on lives lost.

In 1963, Jack Carlson, a former Air Force pilot, wrote his dissertation, entitled “The Value of Life Saving,” with Thomas Schelling as one of his advisers. Carlson pointed out that a range of public policy choices involved putting an implicit value on a life. Banzhaf writes:

Life saving, he [Carlson] wrote, is an economic activity because it involves making choices with scarce resources. For example, he noted that the construction of certain dams resulted in a net loss of lives (more than were expected to be saved from flood control), but, in proceeding with the projects, the public authorities revealed that they viewed those costs as justified by the benefit of increased hydroelectric power and irrigated land. …

Carlson considered the willingness of the US Air Force to trade off costs and machines to save men in two specific applications. One was the recommended emergency procedures when pilots lost control of the artificial “feel” in their flight control systems. A manual provided guidance on when to eject and when to attempt to land the aircraft, procedures which were expected to save the lives of some pilots at the cost of increasing the number of aircraft that would be lost. This approach yielded a lower bound on the value of life of $270,000, which Carlson concluded was easily justified by the human capital cost of training pilots. (Note the estimate was a lower bound, as the manual revealed, in specifying what choices to make, that lives were worth at least that much.) Carlson’s other application was the capsule ejection system for a B-58 bomber. The US Air Force had initially estimated that it would cost $80 million to design an ejection system. Assuming a range of typical cost over-runs and annual costs for maintenance and depreciation, and assuming 1–3 lives would be saved by the system annually, Carlson (p. 92) estimated that in making the investment the USAF revealed its “money valuation of pilots’ lives” to be at least $1.17 million to $9.0 million. (Although this was much higher than the estimate from the ejection manual, the two estimates, being lower bounds, were not necessarily inconsistent.)
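The logic of Carlson's B-58 calculation can be sketched in a few lines: annualize the system's cost, divide by lives saved per year, and read off a lower bound on the revealed value per life. Carlson's actual cost-overrun and depreciation assumptions aren't given in the passage, so the annualized-cost range below is backed out to be consistent with the reported bounds rather than taken from his dissertation:

```python
# Revealed value per life >= annualized system cost / lives saved per year.
# The annualized-cost range is a back-of-envelope assumption chosen to be
# consistent with the $1.17M-$9.0M bounds reported in the text.
annual_cost_low, annual_cost_high = 3.5e6, 9.0e6   # dollars per year (assumed)
lives_low, lives_high = 1, 3                       # lives saved per year (from the text)

lower_bound = annual_cost_low / lives_high    # most lives saved, lowest cost
upper_bound = annual_cost_high / lives_low    # fewest lives saved, highest cost
print(f"revealed value of a pilot's life: ${lower_bound:,.0f} to ${upper_bound:,.0f}")
# Roughly $1.17 million to $9.0 million, matching Carlson's reported range.
```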

Thomas Schelling (who shared the Nobel prize in economics in 2005 for his work in game theory) explicitly introduced the "value of a statistical life" concept in a 1968 essay called "The Life You Save May Be Your Own" (re-using the title of a Flannery O'Connor short story), which appeared in a book called Problems in Public Expenditure Analysis, edited by Samuel B. Chase, Jr. Schelling pointed out that the earlier formulations of how to value a life were based on the technical tradeoffs from the costs of building dams or aircraft, and on the judgements of politicians and generals. Schelling instead proposed a finesse: the value of a life would be based on how consumers actually react to the risks that they face in everyday life. Schelling wrote:

“Death is indeed different from most consumer events, and its avoidance different from most commodities. . . . But people have been dying for as long as they have been living; and where life and death are concerned we are all consumers. We nearly all want our lives extended and are probably willing to pay for it. It is worth while to remind ourselves that the people whose lives may be saved should have something to say about the value of the enterprise and that we analysts, however detached, are not immortal ourselves.”

And how can we observe what people are willing to pay to avoid risks? Researchers can look at studies of the extra pay that is required for workers (including soldiers) to do exceptionally dangerous jobs. They can look at what people are willing to pay for safety equipment. Policy-makers can then say something like: “If people need to be paid extra amount X to avoid a certain amount of risk on the job, or are willing to pay an extra amount Y to reduce some other risk, then the government should also use those values when thinking about whether certain steps to reduce the health risks of air pollution or traffic accidents are worth the cost.” Banzhaf writes:

Schelling’s (1968) crucial insight was that economists could evade the moral thicket of valuing “life” and instead focus on people’s willingness to trade off money for small risks. For example, a policy to reduce air pollution in a city of one million people that reduces the risk of premature death by one in 500,000 for each person would be expected to save two lives over the affected population. But from the individuals’ perspectives, the policy only reduces their risks of death by 0.0002 percentage points. This distinction is widely recognized as the critical intellectual move supporting the introduction of values for (risks to) life and safety  into applied benefit–cost analysis. Although it is based on valuing risk reductions, not lives, the value of a statistical life concept maintains an important rhetorical link to the value of life insofar as it normalizes the risks to value them on a “per-life” basis. By finessing the distinction between lives and risks in this way, the VSL concept overcame the political problems of valuing life while remaining relevant to policy questions.

Thus, when an economist or policy-maker says that a life is worth $9 million, they don't mean that lots of people are willing to sell their life for a $9 million check. Instead, they mean that if a public policy intervention could reduce the risk of death in a way that on average would save one life in a city of 9 million people (or alternatively, reduce the risk of death in a way that would save 10 lives in a city of 900,000 people), then the policy is worth undertaking as long as it costs less than $9 million per statistical life saved. In turn, that willingness to pay for risk reduction is based on the actual choices that people make in trading money and risk.
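
The arithmetic behind this finesse is worth making concrete. Here is a minimal sketch in Python using the illustrative numbers from Banzhaf's example above; the $18 willingness-to-pay figure is my own assumption, chosen so the implied value works out to $9 million.

```python
# A minimal sketch of the VSL arithmetic. The 1-in-500,000 risk reduction
# and the city of one million come from Banzhaf's example; the $18
# willingness-to-pay figure is an assumption for illustration.

def value_of_statistical_life(wtp_per_person, risk_reduction):
    """Each person pays wtp_per_person to lower their risk of death by
    risk_reduction; the implied value per statistical life is the ratio."""
    return wtp_per_person / risk_reduction

vsl = value_of_statistical_life(wtp_per_person=18, risk_reduction=1 / 500_000)
print(f"implied VSL: ${vsl:,.0f}")  # $9,000,000

# A policy covering 1,000,000 people, each getting that risk reduction,
# saves 2 statistical lives; it passes a benefit-cost test if it costs
# less than 2 * $9 million = $18 million in total.
lives_saved = 1_000_000 * (1 / 500_000)
print(f"lives saved: {lives_saved:.0f}, break-even cost: ${lives_saved * vsl:,.0f}")
```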

Underutilized Labor in the U.S. Economy

A few weeks back I explained "Why Different Unemployment Measures Tell (Mostly) the Same Story." The basic theme was that while you can define unemployment in ways that make the level at a given time higher or lower, these different measures (mostly) rise and fall together. A common reaction in my in-box was along these lines: "OK, the argument about unemployment rates is fair. But isn't the real problem the fall in labor force participation, which isn't captured in the unemployment rate?" Gerald Mayer offers some insight about this broader question in "The increased supply of underutilized labor from 2006 to 2014," which appears in the November 2014 issue of the Monthly Labor Review, published by the U.S. Bureau of Labor Statistics. There is also some interesting complementary analysis of the BLS data from Drew Desilver at the Pew Research Center in "More and more Americans are outside the labor force entirely. Who are they?"

Here is the concern in a nutshell. Sure, the unemployment rate has fallen from its peak of 10% in October 2009 to 5.8% in October 2014. Here's the definition of unemployment, according to the BLS: "People are classified as unemployed if they do not have a job, have actively looked for work in the prior 4 weeks, and are currently available for work."

This definition of unemployment makes some sense. After all, you don't want to count a happily retired person or a happily stay-at-home spouse as being "unemployed." Also, it's worth noting that as long as you tell the government survey that you are trying to find a job, you continue to be counted as unemployed, even if that period of unemployment lasts months or years. But the specific definition of unemployment also raises questions. In particular, what about a person who looked hard for a job six months ago, gave up in discouragement at the lack of opportunities, but would still like a job if one was available? This person is not counted in the unemployment rate, but instead is "out of the labor force."

The share of Americans out of the labor force has been rising, as Mayer shows in this figure. I've explored some of the explanations for this phenomenon over the last couple of decades, like an aging population with more retirees and a rise in those receiving disability, in earlier posts (for example, here and here). But it raises the question of whether the official unemployment rate, given the quirks of how it is measured, is missing people who want to work but are being treated as out of the labor force.

Here, the point I would emphasize is that there are two main reasons someone might be out of the labor force. One possibility is that they don't want a job. Another possibility is that they want to work, but for whatever reason they haven't looked for a job in the last four weeks (perhaps because of illness, or being in a training program, or being discouraged about finding a job). The survey refers to this group as "marginally attached" to the labor force, who are defined this way: "These are individuals without jobs who are not currently looking for work (and therefore are not counted as unemployed), but who nevertheless have demonstrated some degree of labor force attachment. Specifically, to be counted as marginally attached to the labor force, they must indicate that they currently want a job, have looked for work in the last 12 months (or since they last worked if they worked within the last 12 months), and are available for work." Here's how Desilver at the Pew Research Center breaks down the reasons that the marginally attached give for why they haven't looked for a job in the last four weeks:

[Figure: Desilver's breakdown of why marginally attached workers have not looked for work in the past four weeks]

Bottom line: If most of those out of the labor force say that they don't currently want a job, then the unemployment rate is a pretty decent measure of underutilized labor in the U.S. economy. But if many of those out of the labor force actually want a job and are just marginally attached to the labor force, then the unemployment rate would be potentially deceptive.

The statistics on this point are clear enough. The marginally attached can account for only about one-tenth of the decline in the labor force participation rate, according to Mayer's analysis of the BLS data. Or as Desilver writes in the Pew report: "Last month, according to BLS, 85.9 million adults didn't want a job now, or 93.3% of all adults not in the labor force."
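
To see how much the marginally attached could change the headline number, here is a minimal sketch of the calculation. The construction mirrors the BLS U-3 (official) and U-5 (broader) definitions, but the numbers below are made up for illustration, not actual BLS data.

```python
# A minimal sketch comparing the official unemployment rate with a broader
# rate that adds the marginally attached. The construction mirrors the BLS
# U-3 and U-5 definitions; the figures are hypothetical.

def u3_rate(unemployed, employed):
    """Official rate: unemployed as a share of the labor force."""
    return unemployed / (unemployed + employed)

def u5_rate(unemployed, employed, marginally_attached):
    """Broader rate: add the marginally attached to both the count of
    underutilized workers and the labor force."""
    labor_force_plus = unemployed + employed + marginally_attached
    return (unemployed + marginally_attached) / labor_force_plus

# Hypothetical figures, in millions:
employed, unemployed, marginal = 147.0, 9.0, 2.2
print(f"U-3: {u3_rate(unemployed, employed):.1%}")            # about 5.8%
print(f"U-5: {u5_rate(unemployed, employed, marginal):.1%}")  # about 7.1%
```

The point of the comparison is that with the marginally attached being a small group relative to the unemployed, the broader rate moves up only modestly, and the two measures rise and fall together.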

Desilver also offers an interesting breakdown by age of those who say they don't want a job. Among those 55 and over, the share of those who say they don't want a job is falling. Among those in the 25-54 age bracket, it has edged up just a bit. But the age group with by far the biggest rise since 2000 in those saying they don't want a job is the 16-24 age group.

So what\’s the bottom line on the extent and patterns of underutilized labor in the U.S. economy? Here are my own conclusions.

1) The decline in the plain old meat-and-potatoes unemployment rate in the last few years is not primarily a result of discouraged or marginally attached workers leaving the labor force. Over 93% of those who are not in the labor force don't want a job right now. Of course, it's always possible that people who say they don't want a job might still be open to the idea of taking one if the right offer came along. But when people who say they don't want a job don't have a job, it's hard for me to regard that as a severe social problem.

2) Those who have part-time jobs are not counted as unemployed, even when they would prefer full-time employment. The number of part-timers who would like full-time work has remained well above pre-recession levels since the end of the Great Recession in 2009, so this is a clear-cut case where the fall in the unemployment rate isn't a good measure of underutilized labor.

3) The official unemployment rate doesn't look at the amount of time people are unemployed, but there is good reason to believe that when unemployment on average lasts longer, or when a larger share of the unemployed have been out of a job for a substantial time, the costs both to the individual and to society are higher. Using the ever-helpful FRED website maintained by the Federal Reserve Bank of St. Louis, here's a figure showing the average length of unemployment, which spiked far beyond all previous post-World War II experience during the Great Recession and hasn't yet fallen back into a normal range.

Similarly, here\’s a figure showing the number of civilians unemployed for more than 27 weeks. Again, the spike during the Great Recession was far beyond any other post-World War II experience, and the subsequent decline is not yet back into the normal range.

4) We are in the midst of a social change in which 16-24 year-olds are less likely to want jobs. Some of this is related to more students going on to higher education, as well as to a pattern where fewer high school and college students are looking for work. I do worry about this trend. For many folks of my generation, evenings and summers spent in low-paid service jobs were part of our acculturation to the world of work. As I've noted in the past, I would also favor a more active program of apprenticeships to help young people become connected to the world of work.

5) Overall, I wonder if the biggest underutilization of U.S. labor in quantitative terms is not any of these specific issues, but instead relates to the types of jobs available. Many people would not feel satisfied with just a job, any job. They would like to settle into a job that feels like part of a career, where they can build skills over time, get raises, receive health and retirement benefits, build up some status in the workplace, and have some control over their future employment path. The relatively slow growth of the U.S. economy, together with the rise in inequality of before-tax incomes and the declining share of workers who get health insurance and pension benefits through their employers, means that fewer jobs of this sort are available. Perhaps the biggest underemployment of U.S. labor is not among those who don't have jobs, but instead among those part-timers and full-timers who do have jobs but also have the capability to do so much more–if the overall economic environment offered greater support and encouragement.

India Rebounding?

India has a population of more than 1.2 billion, more than one-sixth of world population. A couple of decades ago, economic growth started surging in India. For example, the share of the population that was essentially destitute, below the Indian government's poverty line, fell from 45% in 1994 to 37% by 2005 and 22% by 2012. But in 2012 and 2013, this growth rate stumbled. Can India's growth rebound?

For some analysis of the issue, the OECD has published its Economic Survey of India 2014. The 55-page overview chapter can be downloaded for free on-line; thematic chapters on manufacturing, the economic status of women, and health care can be read online. In addition, the World Bank has published its India Development Update for October 2014. Here's a figure showing that while India's GDP growth in the last couple of decades hasn't quite been at Chinese levels, it has exceeded emerging-market countries like Brazil and Indonesia, and been streets ahead of the high-income OECD countries.

Here\’s an overview of India\’s economic situation from the OECD:

India experienced strong inclusive growth between 2003 and 2011, with average growth above 8% and the incidence of poverty cut in half. This reflected gains from past structural reforms, strong capital inflows up to 2007 and expansionary fiscal and monetary policies since 2009. These growth engines faltered in 2012. Stubbornly high inflation as well as large current and fiscal deficits left little room for monetary and fiscal stimulus to revive growth. …

In 2014, the economy has shown signs of a turnaround and imbalances have lessened. Fiscal consolidation at the central government level has been accompanied by a decline in both inflation and the current account deficit. Confidence has been boosted by on-going reforms to the monetary policy framework, with more weight given to inflation. The large depreciation in the rupee has also helped revive exports. Industrial production has rebounded and business sentiment has surged, triggered by a decline in political uncertainty. …

Structural reforms would raise India's economic growth. In their absence, however, growth will remain below the 8% growth rate achieved during the previous decade. Infrastructure bottlenecks, a cumbersome business environment, complex and distorting taxes, inadequate education and training, and outdated labour laws are increasingly impeding growth and job creation. Female economic participation remains exceptionally low, holding down incomes and resulting in severe gender inequalities. Although absolute poverty has declined, it remains high, and income inequality has in fact risen since the early 1990s. Inefficient subsidy programmes for food, energy and fertilisers have increased steadily while public spending on health care and education has remained low.

For an encyclopedic overview of macro and micro issues for India, I commend your attention to the reports above. But here are three themes that caught my eye: inefficient subsidies, the need for labor law reform, and problems with the transportation grid.

Inefficient subsidies that don\’t much help the poor

Like many countries of all income levels, India subsidizes certain goods at considerable cost. The value of the subsidies to food, energy, and the like mainly flows to the middle class, not the poor. The OECD notes: "For rice and wheat, leakages in the food subsidy, including widespread diversion to the black market, have been estimated by Gulati et al. (2012) at 40%, and up to 55% by Jha and Ramaswami (2011). According to Jha and Ramaswami (2011), the poor benefit from only around 10% of the spending on food subsidy. … For oil, Anand et al. (2013) estimated that the implicit subsidy is 7 times higher for the richest 10% of households than for the poorest 10%." The energy subsidies in particular are likely to be lower in the next few years because of lower global oil prices. But here's a figure showing the cost of these subsidies for food, fertilizer, and oil–India is encouraging the better off to burn fossil fuels while skimping on government provision of health care.

A need for reform of labor laws

India has a problem with overly restrictive labor laws. The OECD puts together an index with a bunch of measures of how protected workers are. On a scale from 0-6, the U.S. measures about 0.5; the measure for the high-income OECD countries is roughly 2;  and the measure for India exceeds 3. Many of these rules only apply to firms that hire more than a certain number of people, like 10 or 100. As a result, firms in India hesitate to grow, relying instead on networks of tiny firms and temporary workers. The OECD explains:

The vast majority of workers, particularly those in agriculture and the service sector, are not covered by core labour laws. In manufacturing, NSSO data suggest that about 65% of jobs were in firms with less than 10 employees in 2012 (Mehrotra et al., 2014) – the so-called "unorganised sector" – and thus not covered by employment protection legislation (EPL) and many other core labour laws which apply only to larger firms. In addition, the Annual Survey of Industries (ASI) reveals that of those working in the organised manufacturing sector (more than 10 employees), 13% were on temporary contracts or employed by a sub-contractor ("contract labour") in 2010, up from 8% in 2000. Contract workers are also not covered by key employment or social protection regulations. … A comprehensive labour law to consolidate, modernise and simplify existing regulations would allow firms to expand employment and output, and would be more enforceable, thereby extending social protection to more workers. One option would be to create a labour contract for new permanent jobs with less stringent employment protection legislation but with basic rights – standard hours of work, holidays, minimum safety standards and maternity benefits – for all workers irrespective of the firm size.

A need to improve the transportation system

The World Bank points out that transporting goods across India is time-consuming for all kinds of reasons, in a way that hinders economic coordination and growth. In its discussion of truck transportation (although rail and port shipping have similar issues), the report notes:

Road traffic accounts for about 60 percent of all freight traffic in India. Yet, the average speed of a truck on a highway is reported to be just 20-40 km/hour and trucks travel on average 250-300 km per day (compared to 450 km in Brazil and 800 km in the United States). Road conditions play a role in the slow pace of movement of goods, as does the generally poor condition of vehicles. Over one-third of trucks in India are more than 10 years old …

Besides road quality, the next most frequently cited causes for freight delays are customs inefficiencies and state border check-post clearances. A number of studies in the last few years have found that for up to 60 percent of journey time, the truck is not moving. Approximately 15-20 percent of the total journey time is made up of rest and meals; another around 15 percent at toll plazas; and the balance, roughly a quarter of the journey time, is spent at check posts, state borders, city entrances, and other regulatory stoppages. …

Over 650 checkpoints slow freight traffic at state borders. The checkpoints are tasked primarily with reconciliation of central versus state sales taxes in one state with those in the other, as well as checking for road permits and associated road tax compliance, collecting and checking for other local taxes, clearances, as well as checks for and imposition of taxes on or prohibition of the movement of specific types of goods, such as alcoholic products (for state excise taxes) and mineral products (for royalties). …

The potential gains of more efficient and reliable supply chains are enormous. Simply halving the delays due to road blocks, tolls and other stoppages could cut freight times by some 20-30 percent, and logistics costs by even more, as much as 30-40 percent. This would be tantamount to a gain in competitiveness of some 3-4 percent of net sales for key manufacturing sectors …
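
That 20-30 percent figure follows directly from the journey-time shares quoted above. Here is a quick back-of-the-envelope check in Python; the shares come from the World Bank passage, while treating rest and meal time as irreducible is my own simplifying assumption.

```python
# A back-of-the-envelope check of the World Bank's claim. The journey-time
# shares come from the passage above; treating rest and meal time as
# irreducible is a simplifying assumption.

rest_meals = 0.175  # 15-20 percent of journey time (midpoint)
tolls      = 0.15   # around 15 percent at toll plazas
stoppages  = 0.25   # roughly a quarter at check posts, borders, etc.
moving     = 1 - rest_meals - tolls - stoppages  # time actually driving

# Halve the toll and regulatory stoppages; leave driving and rest unchanged.
new_total = moving + rest_meals + 0.5 * (tolls + stoppages)
print(f"freight time saved: {1 - new_total:.0%}")  # about 20 percent
```

Halving roughly 40 percent of journey time saves about 20 percent overall, which lands at the lower end of the World Bank's 20-30 percent range.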

India has enormous potential for economic growth. As someone commented a few years back, the country is "half southern California, half sub-Saharan Africa." Every country has political and social barriers that can lead to rules and regulations that limit economic growth, but India seems to have more than its fair share of such obstacles–which is in part why the gains from reducing these obstacles can be so large.

For a previous look at this topic, see "India's Economic Growth: Issues, Puzzles, Sustainability" from January 3, 2012.