College freshmen typically study about 15 hours per week, according to the just-released National Survey of Student Engagement Annual Report 2014. The report is a wealth of information about how different kinds of learning occur in colleges and universities, how faculty spend their time, and how students spend their time. But here's the figure that jumped out at me.
The focus of the figure is that freshmen who outperform their own expectations for grades study about 17 hours per week, while those who underperform their expectations average about 14 hours of studying per week. For the average student, the issue isn't that they are working lots of hours at a paid job, although the underperforming students seem to work just a little more on average.
The survey has been carried out for some years by researchers at the Indiana University Center for Postsecondary Research, and is funded by the Carnegie Foundation for the Advancement of Teaching. Survey results are based on "355,000 census-administered or randomly sampled first-year and senior students attending 622 U.S. bachelor’s degree-granting institutions that participated in NSSE in spring 2014."
At the risk of sounding all grinchy on this topic, 14 or 17 hours per week of studying doesn't seem to me nearly enough. Over a seven-day week, this is an average of 2 or 2 1/2 hours per day of studying. And remember that these data are based on surveying students, so my guess is that if there's any bias in this estimate, it's more likely to involve students overestimating their actual study time rather than underestimating it.
My usual advice to undergraduate students is that I think of the academic side of college as a time commitment equal to a full-time job or a bit more–say, a time commitment of 40-50 hours per week, including class time. So if you are actually in class for maybe 12-15 hours per week, then a student should be expecting to put in perhaps another 30 hours of study time each week for all classes, on average. The good news here, I guess, is that if you are a student and willing to work the extra hours, you are likely to stand out over a lot of your fellow students who aren't willing to make the commitment. The bad news is that a lot of colleges and universities don't seem to be asking much of their students, and their students are living down to those expectations.
The \”working-age population\” is often defined as those from age 15-64. For many high-income and emerging market economies, the growth in the size of the working-age population has been dropping. Indeed, in Japan the working-age population started contracting back in the mid-1990s; the size of the working age population in the European Union (excluding the UK) started contracting about 2010; and the working age population in China (after several decades of a one-child policy) is projected to start contracting in the next few years. For most high-income countries, the working-age share of the population is in decline.
For example, Hansen said: "[F]or our purpose we may say that the constituent elements of economic progress are (a) inventions, (b) the discovery and development of new territory and new resources, and (c) the growth of population. Each of these in turn, severally and in combination, has opened investment outlets and caused a rapid growth of capital formation." Hansen then noted that population growth had slowed down and that US territory was no longer expanding. Thus, he argued: "We are thus rapidly entering a world in which we must fall back upon a more rapid advance of technology than in the past if we are to find private investment opportunities adequate to maintain full employment. … It is my growing conviction that the combined effect of the decline in population growth, together with the failure of any really important innovations of a magnitude sufficient to absorb large capital outlays, weighs very heavily as an explanation for the failure of the recent recovery to reach full employment."
For a sense of the slowdown in the growth rate of the working-age population since 1970 and in the next few decades, here's a figure from the "Free Exchange" column in the November 22 issue of the Economist. The U.S. economy, with a relatively high birthrate and relatively high levels of immigration, is projected to have slower growth of its working-age population–but not an actual decline.
Here's a figure from a November 14 post by "the data team" at the Economist blog, showing the share of the population of working age. Notice that for Germany and Japan, the share of the population in the 15-64 age bracket peaked more than two decades ago. For the U.S., the peak in the working-age share is only a couple of years back. All of the high-income countries shown here are projected to have a sharp decline in the working-age share of the population in the next few decades, although the share in the U.S. is projected to remain the largest.
Why might a lower working-age population lead to a slower rate of economic growth? One reason is just mechanical: other things being at least roughly equal, a 1% rise in the number of workers will add about 1% to GDP. But this factor only means that we should focus on the growth of per capita or per worker GDP, thus adjusting for the slower growth rate.
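The mechanical point can be sketched with a toy calculation. The growth rates below are hypothetical, chosen only to illustrate how growth in the number of workers and growth in output per worker combine:

```python
# Hypothetical illustration: GDP growth decomposes into growth in the number
# of workers combined with growth in output per worker.

def gdp_growth(worker_growth, productivity_growth):
    """Exact decomposition: (1 + g_gdp) = (1 + g_workers) * (1 + g_output_per_worker)."""
    return (1 + worker_growth) * (1 + productivity_growth) - 1

# Same 1.5% growth in output per worker, but the workforce grows 1% in one
# case and shrinks 0.5% in the other.
growing = gdp_growth(0.010, 0.015)    # roughly 2.5% GDP growth
shrinking = gdp_growth(-0.005, 0.015) # roughly 1.0% GDP growth

print(round(growing, 4), round(shrinking, 4))
```

Output per worker grows at the same rate in both cases, which is why comparing per worker (or per capita) GDP adjusts for the demographic difference.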
Two other concerns are potentially more serious. First, when the working-age population is growing, firms have an ongoing necessity to expand their investment spending, just to keep up with the number of workers. Conversely, slower growth of the working-age population reduces incentives to invest. Second, if the slower-growing or relatively smaller working age population finds that it must bear a substantially higher tax burden to support the growing proportion of elderly, the disincentives to work could become a factor in slowing growth.
What are the broad policy implications of a slow-growing or relatively smaller working-age population? U.S. investment levels have indeed been lower than expected in recent years, given that the Great Recession officially ended back in mid-2009. Following in the lines of Alvin Hansen's 1938 discussion, one can imagine three possibilities for minimizing the risk of secular stagnation.
First, one might try to avoid the population decline. Government support for family-friendly policies hasn't had much substantial effect on falling birthrates across high-income countries. But there are other possibilities. The United States has relatively open borders for legal immigration, not to mention illegal immigration, which increases the working-age population. Also, one can imagine expanding the definition of "working age" so that it includes workers in the 65-75 age range. A number of steps could be taken to encourage a larger share of these workers to remain in the workforce, at least part time.
Second, Hansen emphasized "the discovery and development of new territory and new resources." Substantial discoveries of new territory seem implausible, but the possibilities of expanding trade by active participation in the globalizing economy remain viable. Also, the U.S. economy has the capacity for a considerable expansion of its energy resources. As I have argued in an earlier post, I personally favor what I like to call "The Drill-Baby Carbon Tax: A Grand Compromise on Energy Policy." Such a policy would move ahead with all deliberate speed both on developing U.S. energy resources and also on finding ways to reduce carbon and other emissions.
When the working-age population is growing, it stirs up the economy in a way that is often conducive to growth. With growth of the working-age population slowing, alternative ways of stirring up the economy and encouraging growth become even more important.
As Thanksgiving preparations arrive, I naturally find my thoughts veering to the evolution of demand for turkey, technological change in turkey production, market concentration in the turkey industry, and price indexes for a classic Thanksgiving dinner. Not that there's anything wrong with that. (Note: This is an updated and amended version of a post that was first published on Thanksgiving Day 2011.)
On the demand side, the quantity of turkey per person consumed rose dramatically from the mid-1970s up to about 1990, but since then has declined somewhat. The figure below is from the Eatturkey.com website run by the National Turkey Federation. Apparently, the Classic Thanksgiving Dinner is becoming slightly less widespread.
On the production side, the National Turkey Federation explains: "Turkey companies are vertically integrated, meaning they control or contract for all phases of production and processing – from breeding through delivery to retail." However, production of turkeys has shifted substantially, away from a model in which turkeys were hatched and raised all in one place, and toward a model in which all the steps of turkey production have become separated and specialized–with some of these steps happening at much larger scale. The result has been an efficiency gain in the production of turkeys. Here is some commentary from the 2007 USDA report, with references to charts omitted for readability:
"In 1975, there were 180 turkey hatcheries in the United States compared with 55 operations in 2007, or 31 percent of the 1975 hatcheries. Incubator capacity in 1975 was 41.9 million eggs, compared with 38.7 million eggs in 2007. Hatchery intensity increased from an average 233 thousand egg capacity per hatchery in 1975 to 704 thousand egg capacity per hatchery in 2007.
Some decades ago, turkeys were historically hatched and raised on the same operation and either slaughtered on or close to where they were raised. Historically, operations owned the parent stock of the turkeys they raised while supplying their own eggs. The increase in technology and mastery of turkey breeding has led to highly specialized operations. Each production process of the turkey industry is now mainly represented by various specialized operations.
Eggs are produced at laying facilities, some of which have had the same genetic turkey breed for more than a century. Eggs are immediately shipped to hatcheries and set in incubators. Once the poults are hatched, they are then typically shipped to a brooder barn. As poults mature, they are moved to growout facilities until they reach slaughter weight. Some operations use the same building for the entire growout process of turkeys. Once the turkeys reach slaughter weight, they are shipped to slaughter facilities and processed for meat products or sold as whole birds.
Turkeys have been carefully bred to become the efficient meat producers they are today. In 1986, a turkey weighed an average of 20.0 pounds. This average has increased to 28.2 pounds per bird in 2006. The increase in bird weight reflects an efficiency gain for growers of about 41 percent."
The 2014 report points out that the capacity of eggs per hatchery has continued to rise (again, references to charts omitted):
For several decades, the number of turkey hatcheries has declined steadily. During the last six years, however, this decrease began to slow down. As of 2013, there are 54 turkey hatcheries in the United States, down from 58 in 2008, but up from the historical low of 49 reached in 2012. The total capacity of these facilities remained steady during this period at approximately 39.4 million eggs. The average capacity per hatchery reached a record high in 2012. During 2013, average capacity per hatchery was 730 thousand (data records are available from 1965 to present).
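The averages in the quoted reports follow directly from dividing total incubator capacity by the number of hatcheries. As a quick arithmetic check on the figures quoted above:

```python
# Sanity check of the hatchery averages implied by the quoted USDA totals:
# average capacity per hatchery = total incubator capacity / number of hatcheries.

def avg_capacity(total_eggs_millions, hatcheries):
    """Average egg capacity per hatchery, in thousands of eggs."""
    return total_eggs_millions * 1000 / hatcheries

cap_1975 = avg_capacity(41.9, 180)  # about 233 thousand eggs per hatchery
cap_2007 = avg_capacity(38.7, 55)   # about 704 thousand eggs per hatchery
cap_2013 = avg_capacity(39.4, 54)   # about 730 thousand eggs per hatchery

print(round(cap_1975), round(cap_2007), round(cap_2013))
```

Even with total capacity roughly flat, concentrating it in far fewer hatcheries roughly tripled the average scale of a hatchery between 1975 and 2013.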
U.S. agriculture is full of examples of remarkable increases in yields over a few decades, but they always drop my jaw. I tend to think of a "turkey" as a product that doesn't have a lot of opportunity for technological development, but clearly I'm wrong. Here's a graph showing the rise in size of turkeys over time from the 2007 report.
The production of turkey remains an industry that is not very concentrated, with three relatively large producers and then more than a dozen mid-sized producers. Here's a list of top turkey producers in 2012 from the National Turkey Federation:
For some reason, this entire post is reminding me of the old line that if you want to have free-flowing and cordial conversation at a dinner party, never seat two economists beside each other. Did I mention that I make an excellent chestnut stuffing?
Anyway, the starting point for measuring inflation is to define a relevant "basket" or group of goods, and then to track how the price of this basket of goods changes over time. When the Bureau of Labor Statistics measures the Consumer Price Index, the basket of goods is defined as what a typical U.S. household buys. But one can also define a more specific basket of goods if desired, and since 1986, the American Farm Bureau Federation has been using more than 100 shoppers in states across the country to estimate the cost of purchasing a Thanksgiving dinner. The basket of goods for their Classic Thanksgiving Dinner Price Index looks like this:
The cost of buying the Classic Thanksgiving Dinner rose by a bit less than 1% in 2014, compared with 2013. The top line of the graph that follows shows the nominal price of purchasing the basket of goods for the Classic Thanksgiving Dinner. The lower line on the graph shows the price of the Classic Thanksgiving Dinner adjusted for the overall inflation rate in the economy. The line is relatively flat, especially since 1990 or so, which means that inflation in the Classic Thanksgiving Dinner has actually been a pretty good measure of the overall inflation rate.
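For readers who want the mechanics, here is a minimal sketch of how a fixed-basket price index of this kind works. The items, quantities, and prices are made up for illustration, not the actual Farm Bureau data:

```python
# Minimal sketch of a fixed-basket price index, with hypothetical prices for
# a toy three-item Thanksgiving basket (not the actual Farm Bureau basket).

basket = {"turkey": 1, "stuffing": 2, "pumpkin pie": 1}  # fixed quantities

prices_base = {"turkey": 20.00, "stuffing": 3.00, "pumpkin pie": 5.00}
prices_now = {"turkey": 21.00, "stuffing": 3.10, "pumpkin pie": 5.20}

def basket_cost(prices, basket):
    """Cost of buying the fixed basket at the given prices."""
    return sum(prices[item] * qty for item, qty in basket.items())

# Index: cost of the same basket now, relative to the base year (base = 100).
index_now = 100 * basket_cost(prices_now, basket) / basket_cost(prices_base, basket)

# Inflation-adjusted ("real") cost: deflate the nominal cost by an
# economy-wide price index (hypothetical value below, base = 100).
overall_cpi = 103.0
real_cost = basket_cost(prices_now, basket) / (overall_cpi / 100)

print(round(index_now, 1), round(real_cost, 2))
```

When the basket's own index rises at about the same rate as the economy-wide index, the inflation-adjusted cost line stays flat, which is exactly the pattern in the figure.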
Thanksgiving is a distinctively American holiday, and it's my favorite. Good food, good company, no presents–and all these good topics for conversation. What's not to like?
Banzhaf begins his story just after RAND Corporation had become an independent organization in 1948. Banzhaf explains what happened in one of its first big contracts (citations omitted):
The US Air Force asked RAND to apply systems analysis to design a first strike on the Soviets. …. Paxson and RAND were initially proud of their optimization model and the computing power that they brought to bear on the problem, which crunched the numbers for over 400,000 configurations of bombs and bombers using hundreds of equations. The massive computations for each configuration involved simulated games at each enemy encounter, each of which had first been modeled in RAND’s new aerial combat research room. They also involved numerous variables for fighters, logistics, procurement, land bases, and so on. Completed in 1950, the study recommended that the United States fill the skies with numerous inexpensive and vulnerable propeller planes, many of them decoys carrying no nuclear weapons, to overwhelm the Soviet air defenses. Though losses would be high, the bombing objectives would be met. While RAND was initially proud of this work, pride and a haughty spirit often go before a fall. RAND’s patrons in the US Air Force, some of whom were always skeptical of the idea that pencil-necked academics could contribute to military strategy, were apoplectic. RAND had chosen a strategy that would result in high casualties, in part because the objective function had given zero weight to the lives of airplane crews.
RAND quickly backpedalled on the study, and instead moved to a more cautious approach which spelled out a range of choices: for example, some choices might cost more in money but be expected to have fewer deaths, while other choices might cost less in money but be expected to have more deaths. The idea was that the think tank identified the range of choices, and the generals would choose among them. But of course, financial resources were limited by political considerations, and so the choices made by the military would typically need to involve some number of deaths higher than the minimum that would have been achievable if more money had been available. In that sense, spelling out a range of tradeoffs also spelled out the monetary value that would be put on lives lost.
In 1963, Jack Carlson, a former Air Force pilot, wrote his dissertation, entitled “The Value of Life Saving,” with Thomas Schelling as one of his advisers. Carlson pointed out that a range of public policy choices involved putting an implicit value on a life. Banzhaf writes:
Life saving, he [Carlson] wrote, is an economic activity because it involves making choices with scarce resources. For example, he noted that the construction of certain dams resulted in a net loss of lives (more than were expected to be saved from flood control), but, in proceeding with the projects, the public authorities revealed that they viewed those costs as justified by the benefit of increased hydroelectric power and irrigated land. …
Carlson considered the willingness of the US Air Force to trade off costs and machines to save men in two specific applications. One was the recommended emergency procedures when pilots lost control of the artificial “feel” in their flight control systems. A manual provided guidance on when to eject and when to attempt to land the aircraft, procedures which were expected to save the lives of some pilots at the cost of increasing the number of aircraft that would be lost. This approach yielded a lower bound on the value of life of $270,000, which Carlson concluded was easily justified by the human capital cost of training pilots. (Note the estimate was a lower bound, as the manual revealed, in specifying what choices to make, that lives were worth at least that much.) Carlson’s other application was the capsule ejection system for a B-58 bomber. The US Air Force had initially estimated that it would cost $80 million to design an ejection system. Assuming a range of typical cost over-runs and annual costs for maintenance and depreciation, and assuming 1–3 lives would be saved by the system annually, Carlson (p. 92) estimated that in making the investment the USAF revealed its “money valuation of pilots’ lives” to be at least $1.17 million to $9.0 million. (Although this was much higher than the estimate from the ejection manual, the two estimates, being lower bounds, were not necessarily inconsistent.)
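Carlson's revealed-preference arithmetic can be sketched in a few lines: the annualized cost of a safety system, divided by the number of lives it is expected to save each year, gives a lower bound on the implied value per life. The cost figure below is an illustrative assumption, not Carlson's actual input:

```python
# Illustrative sketch of Carlson's revealed-preference logic: if the Air Force
# chooses to pay for a safety system, the annualized cost per life saved is a
# lower bound on its implicit valuation of a pilot's life. The dollar figure
# here is an assumption for illustration, not Carlson's actual data.

def implied_value_per_life(annualized_cost, lives_saved_per_year):
    """Lower bound on the implied value of a life revealed by the investment."""
    return annualized_cost / lives_saved_per_year

annualized_cost = 9_000_000  # assumed annual cost of the safety system ($)

# With an assumed 1 to 3 lives saved per year, the lower bound is a range:
high = implied_value_per_life(annualized_cost, 1)  # $9.0 million per life
low = implied_value_per_life(annualized_cost, 3)   # $3.0 million per life

print(low, high)
```

Note that these are lower bounds: the decision-makers revealed they valued lives at least this much, which is why two quite different estimates can both be consistent.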
Thomas Schelling (who shared the Nobel prize in economics in 2005 for his work in game theory) explicitly introduced the "value of a statistical life" concept in a 1968 essay called “The Life You Save May Be Your Own” (thus re-using the title of a Flannery O'Connor short story), which appeared in a book called Problems in Public Expenditure Analysis, edited by Samuel B. Chase, Jr. Schelling pointed out that the earlier formulations of how to value a life were based on the technical tradeoffs from the costs of building dams or aircraft, and the judgements of politicians and generals. Schelling instead proposed a finesse: the value of a life would be based on how consumers actually react to the risks that they face in everyday life. Schelling wrote:
"Death is indeed different from most consumer events, and its avoidance different from most commodities. . . . But people have been dying for as long as they have been living; and where life and death are concerned we are all consumers. We nearly all want our lives extended and are probably willing to pay for it. It is worth while to remind ourselves that the people whose lives may be saved should have something to say about the value of the enterprise and that we analysts, however detached, are not immortal ourselves."
And how can we observe what people are willing to pay to avoid risks? Researchers can look at studies of the extra pay that is required for workers (including soldiers) to do exceptionally dangerous jobs. They can look at what people are willing to pay for safety equipment. Policy-makers can then say something like: "If people need to be paid an extra amount X to avoid a certain amount of risk on the job, or are willing to pay an extra amount Y to reduce some other risk, then the government should also use those values when thinking about whether certain steps to reduce the health risks of air pollution or traffic accidents are worth the cost." Banzhaf writes:
Schelling’s (1968) crucial insight was that economists could evade the moral thicket of valuing “life” and instead focus on people’s willingness to trade off money for small risks. For example, a policy to reduce air pollution in a city of one million people that reduces the risk of premature death by one in 500,000 for each person would be expected to save two lives over the affected population. But from the individuals’ perspectives, the policy only reduces their risks of death by 0.0002 percentage points. This distinction is widely recognized as the critical intellectual move supporting the introduction of values for (risks to) life and safety into applied benefit–cost analysis. Although it is based on valuing risk reductions, not lives, the value of a statistical life concept maintains an important rhetorical link to the value of life insofar as it normalizes the risks to value them on a “per-life” basis. By finessing the distinction between lives and risks in this way, the VSL concept overcame the political problems of valuing life while remaining relevant to policy questions.
Thus, when an economist or policy-maker says that a life is worth $9 million, they don't mean that lots of people are willing to sell their life for a $9 million check. Instead, they mean that if a public policy intervention could reduce the risk of death in a way that on average would save one life in a city of 9 million people (or alternatively, reduce the risk of death in a way that would save 10 lives in a city of 900,000 people), then the policy is worth undertaking so long as its total cost does not exceed what those people are collectively willing to pay for the risk reduction. In turn, that willingness to pay for risk reduction is based on the actual choices that people make in trading money and risk.
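The arithmetic behind a statistical-life value can be sketched directly: divide each person's willingness to pay by the size of the risk reduction they are paying for. The $1 willingness-to-pay figure below is an illustrative assumption:

```python
# Minimal sketch of the value-of-a-statistical-life (VSL) arithmetic.
# The willingness-to-pay figure is an assumption for illustration.

def vsl(willingness_to_pay, risk_reduction):
    """VSL = willingness to pay per person / per-person reduction in death risk."""
    return willingness_to_pay / risk_reduction

# If each person would pay $1 to cut their own risk of death by 1 in 9 million:
value = vsl(1.00, 1 / 9_000_000)  # $9 million per statistical life

# Equivalently: a policy covering 9 million people at $1 each costs $9 million
# in total and is expected to save 9,000,000 * (1/9,000,000) = 1 life.
expected_lives_saved = 9_000_000 * (1 / 9_000_000)

print(round(value), round(expected_lives_saved, 6))
```

The point of the "finesse" is visible here: no one is ever asked to price a whole life, only a tiny risk, but the per-person payments still sum to a per-life figure.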
Here is the concern in a nutshell. Sure, the unemployment rate has fallen from its peak of 10% in October 2009 to 5.8% in October 2014. Here's the definition of unemployment, according to the BLS: "People are classified as unemployed if they do not have a job, have actively looked for work in the prior 4 weeks, and are currently available for work."
This definition of unemployment makes some sense. After all, you don't want to count a happily retired person or a happily stay-at-home spouse as being "unemployed." Also, it's worth noting that as long as you tell the government survey that you are trying to find a job, you continue to be counted as unemployed, even if that period of unemployment lasts months or years. But the specific definition of unemployment also raises questions. In particular, what about a person who looked hard for a job six months ago, gave up in discouragement at the lack of opportunities, but would still like a job if one was available? This person is not counted in the unemployment rate, but instead is counted as "out of the labor force."
The share of Americans out of the labor force has been rising, as Mayer shows in this figure. I've explored some of the explanations for this phenomenon over the last couple of decades, like an aging population with more retirees and a rise in those receiving disability, in earlier posts (for example, here and here). But it raises the question of whether the official unemployment rate, given these quirks of measurement, is missing people who want to work but are counted as out of the labor force.
Here, the point I would emphasize is that there are two main reasons someone might be out of the labor force. One possibility is that they don't want a job. Another possibility is that they want to work, but for whatever reason they haven't looked for a job in the last four weeks (perhaps because of illness, or being in a training program, or being discouraged about finding a job). The survey refers to this group as "marginally attached" to the labor force, who are defined this way: "These are individuals without jobs who are not currently looking for work (and therefore are not counted as unemployed), but who nevertheless have demonstrated some degree of labor force attachment. Specifically, to be counted as marginally attached to the labor force, they must indicate that they currently want a job, have looked for work in the last 12 months (or since they last worked if they worked within the last 12 months), and are available for work." Here's how Desilver at the Pew Foundation breaks down the reasons that the marginally attached give for why they haven't looked for a job in the last four weeks:
Bottom line: If many of those out of the labor force say that they don't currently want a job, then the unemployment rate is a pretty decent measure of underutilized labor in the U.S. economy. But if many of those out of the labor force want a job but are counted as marginally attached to the labor force, then the unemployment rate would be potentially deceptive.
The statistics on this point are clear enough. The marginally attached can account for about one-tenth of the decline in the labor force participation rate, according to Mayer at the BLS. Or as Desilver writes in the Pew Report: "Last month, according to BLS, 85.9 million adults didn’t want a job now, or 93.3% of all adults not in the labor force."
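The logic of folding the marginally attached into the headline unemployment rate (roughly the idea behind the BLS's broader U-5 measure) can be sketched with hypothetical numbers:

```python
# Sketch of how the headline unemployment rate compares with a broader measure
# that adds the "marginally attached" to both the count of the un- and
# under-employed and to the labor force (the logic behind the BLS U-5 measure).
# All figures below are hypothetical, for illustration only.

def unemployment_rate(unemployed, labor_force):
    """Headline (U-3 style) unemployment rate, in percent."""
    return 100 * unemployed / labor_force

def augmented_rate(unemployed, labor_force, marginally_attached):
    """Broader rate counting the marginally attached, in percent."""
    return 100 * (unemployed + marginally_attached) / (labor_force + marginally_attached)

# Hypothetical figures, in millions:
unemployed = 9.0
labor_force = 156.0
marginally_attached = 2.1

u3 = unemployment_rate(unemployed, labor_force)
u5 = augmented_rate(unemployed, labor_force, marginally_attached)

print(round(u3, 1), round(u5, 1))
```

The broader measure is always at least as high as the headline rate, but how much higher depends entirely on how many of those out of the labor force actually want a job.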
Desilver also offers an interesting breakdown by age of those who say they don't want a job. Among those 55 and over, the share of those who say they don't want a job is falling. Among those in the 25-64 age bracket, it has edged up just a bit. But the age group with by far the biggest rise in those saying they don't want a job since 2000 is the 16-24 age group.
So what's the bottom line on the extent and patterns of underutilized labor in the U.S. economy? Here are my own conclusions.
1) The decline in the plain old meat-and-potatoes unemployment rate in the last few years is not primarily a result of discouraged or marginally attached workers leaving the labor force. Over 93% of those who are not in the labor force don't want a job right now. Of course, it's always possible that people who say they don't want a job might still be open to the idea of taking one if the right offer came along. But when people who say they don't want a job don't have a job, it's hard for me to regard it as a severe social problem.
3) The official unemployment rate doesn't look at the amount of time people are unemployed, but there is good reason to believe that when unemployment on average lasts longer, or when a larger share of the unemployed have been out of a job for a substantial time, the costs both to the individual and to society are higher. Using the ever-helpful FRED website maintained by the Federal Reserve Bank of St. Louis, here's a figure showing the average length of unemployment, how it spiked far beyond all previous post-World War II experience during the Great Recession, and how it hasn't yet fallen back into a normal range.
Similarly, here's a figure showing the number of civilians unemployed for more than 27 weeks. Again, the spike during the Great Recession was far beyond any other post-World War II experience, and the subsequent decline is not yet back into the normal range.
4) We are in the midst of a social change in which 16-24 year-olds are less likely to want jobs. Some of this is related to more students going on to higher education, as well as to a pattern where fewer high school and college students are looking for work. I do worry about this trend. For many folks of my generation, some evenings and summers spent in low-paid service jobs were part of our acculturation to the world of work. As I've noted in the past, I would also favor a more active program of apprenticeships to help young people become connected to the world of work.
5) Overall, I wonder if the biggest underutilization of U.S. labor in quantitative terms is not any of these specific issues, but instead relates to the types of jobs available. Many people would not feel satisfied with just a job, any job. They would like to settle into a job that feels like part of a career, where they can build skills over time, get raises, receive health and retirement benefits, build up some status in the workplace, and have some control over their future employment path. The relatively slow growth of the U.S. economy, together with the rise in inequality of before-tax incomes and the declining share of workers who get health insurance and pension benefits through their employers, means that fewer jobs of this sort are available. Perhaps the biggest underemployment of U.S. labor is not among those who don't have jobs, but instead among those part-timers and full-timers who do have jobs but also have the capability to do so much more–if the overall economic environment offered greater support and encouragement.
For some analysis of the issue, the OECD has published its Economic Survey of India 2014. The 55-page overview chapter can be downloaded for free on-line; thematic chapters on manufacturing, the economic status of women, and health care can be read online. In addition, the World Bank has published its India Development Update for October 2014. Here's a figure showing that while India's GDP growth in the last couple of decades hasn't quite been at Chinese levels, it has exceeded emerging-market countries like Brazil and Indonesia, and been streets ahead of the high-income OECD countries.
Here's an overview of India's economic situation from the OECD:
India experienced strong inclusive growth between 2003 and 2011, with average growth above 8% and the incidence of poverty cut in half. This reflected gains from past structural reforms, strong capital inflows up to 2007 and expansionary fiscal and monetary policies since 2009. These growth engines faltered in 2012. Stubbornly high inflation as well as large current and fiscal deficits left little room for monetary and fiscal stimulus to revive growth. …
In 2014, the economy has shown signs of a turnaround and imbalances have lessened. Fiscal consolidation at the central government level has been accompanied by a decline in both inflation and the current account deficit. Confidence has been boosted by on-going reforms to the monetary policy framework, with more weight given to inflation. The large depreciation in the rupee has also helped revive exports. Industrial production has rebounded and business sentiment has surged, triggered by a decline in political uncertainty. …
Structural reforms would raise India’s economic growth. In their absence, however, growth will remain below the 8% growth rate achieved during the previous decade. Infrastructure bottlenecks, a cumbersome business environment, complex and distorting taxes, inadequate education and training, and outdated labour laws are increasingly impeding growth and job creation. Female economic participation remains exceptionally low, holding down incomes and resulting in severe gender inequalities. Although absolute poverty has declined, it remains high, and income inequality has in fact risen since the early 1990s. Inefficient subsidy programmes for food, energy and fertilisers have increased steadily while public spending on health care and education has remained low.
For an encyclopedic overview of macro and micro issues for India, I commend your attention to the reports above. But here are three themes that caught my eye: inefficient subsidies, the need for labor law reform, and problems with the transportation grid.
Inefficient subsidies that don't much help the poor
Like many countries of all income levels, India subsidizes certain goods at considerable cost. The value of the subsidies to food, energy, and the like mainly flows to the middle class, not the poor. The OECD notes: "For rice and wheat, leakages in the food subsidy, including widespread diversion to the black market, have been estimated by Gulati et al. (2012) at 40%, and up to 55% by Jha and Ramaswami (2011). According to Jha and Ramaswami (2011), the poor benefit from only around 10% of the spending on food subsidy. … For oil, Anand et al. (2013) estimated that the implicit subsidy is 7 times higher for the richest 10% of households than for the poorest 10%." The energy subsidies in particular are likely to be lower in the next few years because of lower global oil prices. But here's a figure showing the cost of these subsidies for food, fertilizer, and oil–India is encouraging the better-off to burn fossil fuels while skimping on government provision of health care.
A need for reform of labor laws
India has a problem with overly restrictive labor laws. The OECD puts together an index with a bunch of measures of how protected workers are. On a scale from 0-6, the U.S. measures about 0.5; the measure for the high-income OECD countries is roughly 2; and the measure for India exceeds 3. Many of these rules only apply to firms that hire more than a certain number of people, like 10 or 100. As a result, firms in India hesitate to grow, relying instead on networks of tiny firms and temporary workers. The OECD explains:
The vast majority of workers, particularly those in agriculture and the service sector, are not covered by core labour laws. In manufacturing, NSSO data suggest that about 65% of jobs were in firms with less than 10 employees in 2012 (Mehrotra et al., 2014) – the so-called “unorganised sector” – and thus not covered by employment protection legislation (EPL) and many other core labour laws which apply only to larger firms. In addition, the Annual Survey of Industries (ASI) reveals that of those working in the organised manufacturing sector (more than 10 employees) 13% were on temporary contracts or employed by a sub-contractor (“contract labour”) in 2010, up from 8% in 2000. Contract workers are also not covered by key employment or social protection regulations. … A comprehensive labour law to consolidate, modernise and simplify existing regulations would allow firms to expand employment and output, and would be more enforceable, thereby extending social protection to more workers. One option would be to create a labour contract for new permanent jobs with less stringent employment protection legislation but with basic rights – standard hours of work, holidays, minimum safety standards and maternity benefits – for all workers irrespective of the firm size.
A need to improve the transportation system
The World Bank points out that transporting goods across India is time-consuming for all kinds of reasons, in a way that hinders economic coordination and growth. In its discussion of truck transportation (although rail and port shipping have similar issues), the report notes:
Road traffic accounts for about 60 percent of all freight traffic in India. Yet, the average speed of a truck on a highway is reported to be just 20-40 km/hour and trucks travel on average 250-300 km per day (compared to 450 km in Brazil and 800 km in the United States). Road conditions play a role in the slow pace of movement of goods, as does the generally poor condition of vehicles. Over one-third of trucks in India are more than 10 years old …
Besides road quality, the next most frequently cited causes for freight delays are customs inefficiencies and state border check-post clearances. A number of studies in the last few years have found that for up to 60 percent of journey time, the truck is not moving. Approximately 15-20 percent of the total journey time is made up of rest and meals; another around 15 percent at toll plazas; and the balance, roughly a quarter of the journey time, is spent at check posts, state borders, city entrances, and other regulatory stoppages. …
Over 650 checkpoints slow freight traffic at state borders. The checkpoints are tasked primarily with reconciliation of central versus state sales taxes in one state with those in the other, as well as checking for road permits and associated road tax compliance, collecting and checking for other local taxes, clearances, as well as checks for and imposition of taxes on or prohibition of the movement of specific types of goods, such as alcoholic products (for state excise taxes) and mineral products (for royalties). … The potential gains of more efficient and reliable supply chains are enormous. Simply halving the delays due to road blocks, tolls and other stoppages could cut freight times by some 20-30 percent, and logistics costs by even more, as much as 30-40 percent. This would be tantamount to a gain in competitiveness of some 3-4 percent of net sales for key manufacturing sectors …
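The World Bank's arithmetic can be sketched roughly as follows, using midpoints of the journey-time shares quoted above (the specific point estimates within the reported ranges are my own assumptions):

```python
# Rough arithmetic behind the World Bank's claim, using midpoint
# journey-time shares (hypothetical point estimates within the
# ranges the report gives).
rest_meals = 0.175  # "15-20 percent" for rest and meals
tolls      = 0.15   # "around 15 percent" at toll plazas
stoppages  = 0.25   # "roughly a quarter" at check posts, borders, etc.
moving     = 1 - (rest_meals + tolls + stoppages)  # time truck is moving

# Halving the delays from tolls and regulatory stoppages:
saved = 0.5 * (tolls + stoppages)
print(f"Truck moving only {moving:.1%} of journey time")
print(f"Journey time cut by about {saved:.0%}")
```

Halving just the toll and stoppage delays removes about 20% of total journey time, the low end of the report's 20-30 percent estimate for freight-time savings.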
India has enormous potential for economic growth. As someone commented a few years back, the country is “half southern California, half sub-Saharan Africa.” Every country has political and social barriers that can lead to rules and regulations that limit economic growth, but India seems to have more than its fair share of such obstacles–which is in part why the gains from reducing these obstacles can be so large.
Consider two approaches to encouraging those with low skills to be fully engaged in the workplace. The American approach focuses on keeping tax rates low and thus providing a greater financial incentive for people to take jobs. The Scandinavian approach focuses on providing a broad range of day care, education, and other services to support working families, but then imposes high tax rates to pay for it all. In the most recent issue of the Journal of Economic Perspectives, Henrik Jacobsen Kleven contrasts these two models in “How Can Scandinavians Tax So Much?” (28:4, 77-98). Kleven is from Denmark, so perhaps his conclusion is predictable. But the analysis along the way is intriguing.
As a starting point, consider what Kleven calls the “participation tax rate.” When an average worker in a country takes a job, how much will the money they earn increase their standard of living? The answer will depend on two factors: any taxes imposed on what they earn, including income, payroll, and sales taxes; and also the loss of any government benefits for which they become less eligible or ineligible because they are working. In the Scandinavian countries of Denmark, Norway, and Sweden, this “participation tax rate” is about double what it is in the United States. Here’s Kleven:
The contrast is even more striking when considering the so-called “participation tax rate,” which is the effective average tax rate on labor force participation when accounting for the distortions due to income taxes, payroll taxes, consumption taxes, and means-tested transfers. This tax rate is around 80 percent in the Scandinavian countries, implying that an average worker entering employment will be able to increase consumption by only 20 percent of earned income due to the combined effect of higher taxes and lower transfers. By contrast, the average worker in the United States gets to keep 63 percent of earnings when accounting for the full effect of the tax and welfare system.
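Kleven's participation tax rate can be illustrated with a stylized calculation. The function below just formalizes the definition in the quote; the tax and benefit figures are hypothetical round numbers chosen to reproduce the 80% and 63% headline figures, not actual data:

```python
# Stylized illustration of the "participation tax rate": the share of
# gross earnings a worker does NOT keep once income, payroll, and
# consumption taxes plus withdrawn means-tested transfers are counted.
# The specific tax/benefit numbers below are hypothetical.

def participation_tax_rate(gross_earnings, taxes_paid, benefits_lost):
    """Share of earnings lost to taxes plus withdrawn transfers."""
    return (taxes_paid + benefits_lost) / gross_earnings

# A stylized Scandinavian-style case: heavy taxes plus benefit withdrawal
scandinavian = participation_tax_rate(
    gross_earnings=100.0, taxes_paid=55.0, benefits_lost=25.0)

# A stylized U.S.-style case
us = participation_tax_rate(
    gross_earnings=100.0, taxes_paid=27.0, benefits_lost=10.0)

print(f"Scandinavian-style: {scandinavian:.0%} tax, keep {1 - scandinavian:.0%}")
print(f"U.S.-style:         {us:.0%} tax, keep {1 - us:.0%}")
```

The point of the decomposition is that benefit withdrawal acts just like a tax on taking a job, which is why Kleven folds means-tested transfers into the rate.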
A standard American-style prediction would be that countries where the gains from working are so low should see a lower level of participation in the workforce. That prediction does not hold true in cross-country data among high-income countries. Here’s a figure from Kleven’s paper: Notice that the Scandinavian countries have among the highest participation tax rates, but also have among the highest employment rates in the age 20-59 population, both overall and for females. Correlation isn’t causation, as the econometricians love to chant, but it’s still intriguing that the overall pattern across countries is that a higher participation tax rate is correlated with a higher employment rate–the opposite of what one might expect.
What explains this pattern? Kleven argues that just looking at the tax rate isn’t enough, because it also matters what the tax revenue is spent on. For example, the Scandinavian countries spend a lot of money on universal programs for preschool, child care, and elderly care. Kleven calls these “participation subsidies,” because they make it easier for people to work–especially for people who otherwise would need to arrange and pay for child care or elder care themselves. The programs are universal, which means that their value, expressed as a share of earned income, matters much more to a low- or middle-income family than to a high-income family. Here’s Kleven:
“[P]articipation subsidies” [are] due to public spending on the provision of child care, preschool, and elderly care. Even though these programs are typically universal (and therefore available to both working and nonworking families), they effectively subsidize labor supply by lowering the prices of goods that are complementary to working. That is, working families have greater need for support in taking care of their young children or elderly parents, and so demand more of those services other things equal. From this perspective, the cross-country correlations shown in Figure 5 have the expected sign; higher public support for preschool, child care, and elder care is positively associated with the rate of employment. Moreover, the Scandinavian countries are strong outliers as they spend more on such participation subsidies (about 6 percent of aggregate labor income) than any other country.”
Any direct comparison between the United States (population of 316 million) and the Scandinavian countries of Denmark (6 million), Norway (5 million), and Sweden (10 million) is of course fraught with peril. Their history, politics, economies, and institutions differ in so many ways. You can’t just pick up long-standing policies or institutions in one country, plunk them down in another country, and expect them to work the same way.
That said, Kleven’s basic conceptual point seems sound. Provision of good-quality preschool, child care, and elder care does make it easier for all families, but especially low-income families with children, to participate in the labor market. In these three Scandinavian countries, the power of these programs to encourage labor force participation seems to overcome the work disincentives that arise in financing and operating them. This argument has nothing to do with whether preschool and child care programs might help some children to perform better in school–although if they do work in that way, it would strengthen the case for taking this approach.
So here is a hard but intriguing hypothetical question: The U.S. government spends something like $60 billion per year on the Earned Income Tax Credit, which is a refundable tax credit providing income mainly to low-income families with children, and almost as much on the refundable child tax credit. Would low-income families with children be better off, and more attached to the workforce, if a sizeable portion of the $100 billion-plus spent on these tax credits–and aimed at providing financial incentives to work–were instead directed toward universal programs of preschool, child care, and elder care?
The United Nations voted last year to designate November 19 as World Toilet Day because the first World Toilet Summit was held on that date in 2001, and the World Toilet Organization was founded the same day. Out of the global population of 7 billion, about 1 billion people defecate in the open, with about 600 million of those people living in India. According to the World Health Organization and UNICEF, there are 19 countries in the world where more than half the rural population still practices open defecation.
Especially in areas with relatively dense populations, this practice has health consequences. It’s difficult to separate the effects of lacking toilets from other issues of unsafe water supplies. But the World Toilet Organization says that the lack of toilets causes an average of 1,000 child deaths each day due to diarrhea, and other estimates refer to stunted growth and prevalence of infections like typhoid from fecal-borne diseases. There is also an issue of violence against women and girls perpetrated when they lack a private and secure place to defecate.
Talking about toilets can feel uncomfortable, and the discussion can quickly lose its policy focus. In the bulk of this post, I have manfully avoided referring to Thomas Crapper, who greatly improved and popularized the flush toilet in the 19th century. I have not discussed the We Can’t Wait promotions or the dancing turds ads in India. I have sidestepped whether toilet policy should be pursued through a bottom-up or top-down approach. In this case, as in so many others, the easy giggle can too often be a way of minimizing a real public health challenge.
For the long-run future of the U.S. economy, and indeed, the global economy, no subject is more important than the likely course of productivity growth. The McKinsey Quarterly celebrated 50 years of publication with its September 2014 issue. That issue includes a short interview with Robert Solow, with Martin Neil Baily and Frank Comes as interlocutors.
Solow, of course, won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel (commonly known as the “Nobel Prize in economics”) in 1987 “for his contributions to the theory of economic growth.” In a nutshell, Solow demonstrated that the accumulation of capital and of labor was not a sufficient explanation for the process of economic growth, and that a broad element of “technological progress” also needed to play a role. If that concept seems obvious now, it is Solow’s pathbreaking work from more than a half-century ago that helped to make it obvious. Solow is also one of the most gifted expositors in economics. Here are a few of his comments from the interview:
Solow on economic forecasting:
“As an ordinary macroeconomist, I have avoided forecasting as if it were a foul disease—as indeed it is. It’s very damaging to the tissues. So I don’t think one can say too much.”
Solow on capital intensity in the service sector:
“I don’t think we even have a very clear idea about the relative capital intensity within the service sector or between the service sector and goods-producing sector. I remember I was once writing something in which I was describing the service sector as being of relatively low capital intensity. And then I stopped and remembered that the following day I had an appointment with my dentist and that my dentist’s office was as capital intensive a 500 square feet as I had ever seen in my life.”
Solow on the importance of global competition to productivity growth:
What came as something completely new to me was that if you looked at the same industry across countries, there were almost always dramatic differences in either labor productivity or total factor productivity. To my surprise, it turned out that most of the time, certainly more often than not, the difference in productivity—in the auto industry or the steel industry or the residential-construction industry in the US and in countries in Europe—was not only substantial but couldn’t seriously be explained by differences in access to technology.
We also found that the productivity differences could not be traced to differences in access to investment capital. The French automobile industry, much to my surprise, turned out to be more capital intensive than the American automobile industry. So it was not that either. The MGI [McKinsey Global Institute] studies instead traced these differences in productivity to organizational differences, to the way tasks were allocated within a firm or a division—essentially, to failures in managerial decisions. I was, of course, instantly suspicious of this. I figured to myself, “What do you expect a bunch of management consultants to find but differences in management capacities? That’s in their genes. That’s not in my genes.” But MGI made a very convincing case for this. And I came to believe that it was right. …
[T]here was another surprise, for which there was partly anecdotal, partly statistical evidence. If you asked why there were differences that could be erased or diminished by better management, the answer was that it took the spur of sharp competition to induce managers to do what they were in principle capable of doing. So the idea that everybody is everywhere and always maximizing profits turned out to be not quite right. MGI made a very good case that what was lacking in these trailing industries in other countries—or in the US, in cases where the US trailed—was enough exposure to competition from whoever in the world had the best practice. And this, of course, can apply within a country. We know that in any industry, there is a whole distribution of productivity levels across firms and even, sometimes, across establishments within a firm. And much of that must be due to the absence of any spur to do more. So an interesting conclusion to me was that international trade serves a purpose beyond exploiting comparative advantage. It exposes high-level managers in various countries to a little fright. And fright turns out to be an important motivation. … [I]t goes beyond that, even. Competing as part of the world economy is an important way of gaining access to scale. If you’re a Belgian company or even a French company, it may be that best practice requires a scale of production larger than the French domestic market will provide for French producers. So it’s important for such companies to have access to the international market.
The Patient Protection and Affordable Care Act was passed in 2010. The exchanges aimed at reducing the number of people without health insurance started operating, albeit in a halting and often dysfunctional way, in October 2013. So what progress has been made in reducing the number of Americans who now lack health insurance? I checked four sources: the Current Population Survey from the U.S. Census Bureau, the American Community Survey also from the Census, the National Health Interview Survey from the Centers for Disease Control, and the Gallup Poll.
Before listing the results, I’ll just point out that my expectations were not high. No one who took more than a minute to consider the actual legislation ever expected that it would provide universal health insurance. As one example, here’s a White House announcement in September 2010 predicting that the act would reduce the number of Americans without health insurance from about 50 million to about 18 million. In May 2013, the Congressional Budget Office estimated that the implementation of the Affordable Care Act would reduce the number of Americans without health insurance from 55 million in 2013 to 31 million in 2016, with most of that drop coming from people signing up for the new insurance “exchanges” and some coming from an expansion of Medicaid. But the CBO also estimated that by 2023, there would still be 31 million uninsured. These estimates were noticed: for example, here’s a June 2013 Washington Post story about those 31 million. So after all the tumult and the shouting over the Affordable Care Act, both during and after its passage, its White House supporters optimistically expected it to solve about 60% of the problem of Americans lacking health insurance, and nonpartisan sources like the CBO thought it might address about 40% of that problem.
What’s the evidence from the four sources I checked? Well, the first thing one discovers is that it’s only three sources. In one of those acts of bad timing that verges on statistical malpractice, the Census Bureau decided that 2013, right on the verge of the biggest change in the U.S. healthcare system since the 1960s, was an appropriate time to change its survey questions about whether people have health insurance in such a way that the answers from past surveys were not comparable to the results for 2013. For details, see the September 2014 report on “Health Insurance in the United States: 2013” from the U.S. Census Bureau. They write:
The CPS [Current Population Survey] is the longest-running survey conducted by the Census Bureau. The key purpose of the CPS ASEC [Annual Social and Economic Supplement] is to provide timely and detailed estimates of economic well-being, of which health insurance coverage is an important part. … Traditionally, this report has included detailed comparisons of year-to-year changes in health insurance coverage using the CPS ASEC. However, due to the redesign of the health insurance section of the CPS ASEC, its estimates of health insurance coverage are not directly comparable to estimates from prior years of the survey. … The redesigned CPS ASEC is based on over a decade of research, including two national field tests as well as cognitive testing.
If government statisticians had a fan club, I’d be sitting in the front row beaming. But no matter the reasons (the plans for the change were set years earlier, limits on funding precluded doing two surveys, and so on), the timing of this change was a blunder. The survey reports: “In 2013, the percentage of people without health insurance coverage for the entire calendar year was 13.4 percent, or 42.0 million.” Whether this was higher or lower than previous years cannot be answered using this data.
However, the Census points to another survey, the American Community Survey, which has data on the share of those without health insurance from 2008 to 2013. The results aren’t especially informative: a small rise in the share of uninsured during the Great Recession, and a small fall since then.
The National Health Interview Survey from the Centers for Disease Control has been done since 1957. Here are the preliminary results released in mid-September with regard to health insurance for the survey carried out in March 2014. Again, the pattern shows a rise in the share of uninsured during the Great Recession, and a fall since then, without a big break from trend.
For those looking for evidence that the share of uninsured is falling, the strongest evidence comes from the most recent Gallup survey data. This data goes through the third quarter of 2014, and shows a substantial fall in the share of those without health insurance since fall 2013–when the efforts to start covering more of the uninsured kicked into gear.
The Gallup data is the only one of these sources that tracks up through the third quarter of 2014. The drop in the last year almost surely means something. But it’s concerning that the patterns of the Gallup data do not match the overall pattern of the systematic and well-established government surveys. In comparing surveys, the level of a certain answer may be higher or lower, depending on exactly how a question is worded, but the change in the level should still show similar timing. The government surveys show the share of uninsured peaking in 2010, while the Gallup data shows a more-or-less steady rise in the share of uninsured, with a couple of puzzling downward bumps, until third quarter 2013. It may be that people’s awareness of health insurance, or whether they think they have it, or their concern over having health insurance, may be fluctuating in ways that have a bigger effect on the Gallup poll results than on the other surveys.
At this stage, there are bundles of news stories about how many people signed up for the health insurance exchanges or for the expansion of Medicaid, and how the share of people getting health insurance through their jobs is falling. But what’s the overall effect? The national surveys don’t show that the 2010 health care legislation has had much effect at all on the share of those without health insurance, at least not through the first quarter of 2014. Next March and June, when the National Health Interview Survey preliminary data for the later part of 2014 are published, we should start to have a better picture–and a sense of whether the drop shown in the Gallup data holds up in better-established surveys. In September 2015, we’ll have data for 2014 from the Current Population Survey, too.
But it should be crystal clear at this point that if you believed the Patient Protection and Affordable Care Act would provide anything remotely close to universal health insurance coverage, you were badly misled. So far, the CBO-style predictions that the legislation was headed toward addressing less than half of this problem seem on the mark–and perhaps even a bit too optimistic.