Winter 2012 Journal of Economic Perspectives

The Winter 2012 issue of my own Journal of Economic Perspectives is now up on the web. Courtesy of the American Economic Association, this issue and indeed back issues of the journal all the way back through 1994 are freely available on the web. I’ll be blogging about some of these papers over the next week or so, but for now, here’s the table of contents and an abstract for each article.

Symposium: Energy Challenges

“Is There an Energy Efficiency Gap?” by Hunt Allcott and Michael Greenstone

Many analysts of the energy industry have long believed that energy efficiency offers an enormous “win-win” opportunity: through aggressive energy conservation policies, we can both save money and reduce negative externalities associated with energy use. In 1979, Daniel Yergin and the Harvard Business School Energy Project estimated that the United States could consume 30 or 40 percent less energy without reducing welfare. The central economic question around energy efficiency is whether there are investment inefficiencies that a policy could correct. First, we examine choices made by consumers and firms, testing whether they fail to make investments in energy efficiency that would increase utility or profits. Second, we focus on specific types of investment inefficiencies, testing for evidence consistent with each. Three key conclusions arise: First, the evidence presented in the long literature on the subject frequently does not meet modern standards for credibility. Second, when one tallies up the available empirical evidence from different contexts, it is difficult to substantiate claims of a pervasive Energy Efficiency Gap. Third, it is crucial that policies be targeted. Welfare gains will be larger from a policy that preferentially affects the decisions of those consumers subject to investment inefficiencies.
Full-Text Access 

“Creating a Smarter U.S. Electricity Grid,” by Paul L. Joskow

This paper focuses on efforts to build what policymakers call the “smart grid,” involving 1) improved remote monitoring and automatic and remote control of facilities in high-voltage electricity transmission networks; 2) improved remote monitoring, two-way communications, and automatic and remote control of local distribution networks; and 3) installation of “smart” metering and associated communications capabilities on customer premises so that customers can receive real-time price information and/or take advantage of opportunities to contract with their retail supplier to manage the consumer’s electricity demands remotely in response to wholesale prices and network congestion. I examine the opportunities, challenges, and uncertainties associated with investments in “smart grid” technologies. I discuss some basic electricity supply and demand, pricing, and physical network attributes that are critical for understanding the opportunities and challenges associated with expanding deployment of smart grid technologies. Then I cover issues associated with the deployment of these technologies at the high voltage transmission, local distribution, and end-use metering levels.
Full-Text Access 

“Prospects for Nuclear Power,” by Lucas W. Davis

Nuclear power has long been controversial because of concerns about nuclear accidents, storage of spent fuel, and how the spread of nuclear power might raise risks of the proliferation of nuclear weapons. These concerns are real and important. However, emphasizing these concerns implicitly suggests that unless these issues are taken into account, nuclear power would otherwise be cost effective compared to other forms of electricity generation. This implication is unwarranted. Throughout the history of nuclear power, a key challenge has been the high cost of construction for nuclear plants. Construction costs are high enough that it becomes difficult to make an economic argument for nuclear even before incorporating these external factors. This is particularly true in countries like the United States where recent technological advances have dramatically increased the availability of natural gas. The chairman of one of the largest U.S. nuclear companies recently said that his company would not break ground on a new nuclear plant until the price of natural gas was more than double today’s level and carbon emissions cost $25 per ton. This comment summarizes the current economics of nuclear power pretty well. Yes, there is a certain confluence of factors that could make nuclear power a viable economic option. Otherwise, a nuclear power renaissance seems unlikely.
Full-Text Access 

“The Private and Public Economics of Renewable Electricity Generation,” by Severin Borenstein

Generating electricity from renewable sources is more expensive than conventional approaches but reduces pollution externalities. Analyzing the tradeoff is much more challenging than often presumed because the value of electricity is extremely dependent on the time and location at which it is produced, which is not very controllable with some renewables, such as wind and solar. Likewise, the pollution benefits from renewable generation depend on what type of generation it displaces, which also depends on time and location. Without incorporating these factors, cost-benefit analyses of alternatives are likely to be misleading. Other common arguments for subsidizing renewable power—green jobs, energy security, and driving down fossil energy prices—are unlikely to substantially alter the analysis. The role of intellectual property spillovers is a strong argument for subsidizing energy science research, but less persuasive as an enhancement to the value of installing current renewable energy technologies.
Full-Text Access 

“Reducing Petroleum Consumption from Transportation,” by Christopher R. Knittel

The United States consumes more petroleum-based liquid fuel per capita than any other OECD high-income country—30 percent more than the second-highest country (Canada) and 40 percent more than the third-highest (Luxembourg). The transportation sector accounts for 70 percent of U.S. oil consumption and 30 percent of U.S. greenhouse gas emissions. Taking the externalities associated with high U.S. gasoline consumption as largely given, I focus on understanding the policy tools that seek to reduce this consumption. I consider four main channels through which reductions in U.S. oil consumption might take place: 1) increased fuel economy of existing vehicles, 2) increased use of non-petroleum-based, low-carbon fuels, 3) alternatives to the internal combustion engine, and 4) reduced vehicle miles traveled. I then discuss how these policies for reducing petroleum consumption compare with the standard economics prescription for using a Pigouvian tax to deal with externalities. Taking into account that energy taxes are a political hot button in the United States, and also considering some evidence that consumers may not “correctly” value fuel economy, I offer some thoughts about the margins on which policy aimed at reducing petroleum consumption might usefully proceed.
Full-Text Access

“How Will Energy Demand Develop in the Developing World?” by Catherine Wolfram, Orie Shelef and Paul Gertler

Over the next 25 to 30 years, nearly all of the growth in energy demand, fossil fuel use, associated local pollution, and greenhouse gas emissions is forecast to come from the developing world. This paper argues that the world’s poor and near-poor will play a major role in driving medium-run growth in energy consumption. As the world economy expands and poor households’ incomes rise, they are likely to get connected to the electricity grid, gain access to good roads, and purchase energy-using assets like appliances and vehicles for the first time. We argue that the current forecasts for energy demand in the developing world may be understated because they do not accurately capture growth in demand along the extensive margin, as low-income households buy their first durable appliances and vehicles. Within a country, the adoption of energy-using assets typically follows an S-shaped pattern: among the very poor, we see little increase in the number of households owning refrigerators, vehicles, air conditioners, and other assets as incomes go up; above a first threshold income level, we see rapid increases of ownership with income; and above a second threshold, increases in ownership level off. A large share of the world’s population has yet to go through the first transition, suggesting there is likely to be a large increase in the demand for energy in the coming years.
Full-Text Access

Symposium: Higher Education

“The For-Profit Postsecondary School Sector: Nimble Critters or Agile Predators?” by David J. Deming, Claudia Goldin and Lawrence F. Katz

Private for-profit institutions have been the fastest-growing part of the U.S. higher education sector. For-profit enrollment increased from 0.2 percent to 9.1 percent of total enrollment in degree-granting schools from 1970 to 2009, and for-profit institutions account for the majority of enrollments in non-degree-granting postsecondary schools. We describe the schools, students, and programs in the for-profit higher education sector, its phenomenal recent growth, and its relationship to the federal and state governments. Using the 2004 to 2009 Beginning Postsecondary Students (BPS) longitudinal survey, we assess outcomes of a recent cohort of first-time undergraduates who attended for-profits relative to comparable students who attended community colleges or other public or private non-profit institutions. We find that relative to these other institutions, for-profits educate a larger fraction of minority, disadvantaged, and older students, and they have greater success at retaining students in their first year and getting them to complete short programs at the certificate and AA levels. But we also find that for-profit students end up with higher unemployment and “idleness” rates and lower earnings six years after entering programs than do comparable students from other schools and that, not surprisingly, they have far greater default rates on their loans.
Full-Text Access 

“Student Loans: Do College Students Borrow Too Much–Or Not Enough?” by Christopher Avery and Sarah Turner

Total student loan debt rose to over $800 billion in June 2010, overtaking total credit card debt outstanding for the first time. By the time this article sees print, the continually updated Student Loan Debt Clock will show an accumulated total of roughly $1 trillion. Borrowing to finance educational expenditures has been increasing—more than quadrupling in real dollars since the early 1990s. The sheer magnitude of these figures has led to increased public commentary on the level of student borrowing. We move the discussion of student loans away from anecdote by establishing a framework for considering the use of student loans in the optimal financing of collegiate investments. From a financial perspective, enrolling in college is equivalent to signing up for a lottery with large expected gains—indeed, the figures presented here suggest that college is, on average, a better investment today than it was a generation ago—but it is also a lottery with significant probabilities of both larger positive, and smaller or even negative, returns. We look to available—albeit limited—evidence to assess which types of students are likely to be borrowing too much or too little.
Full-Text Access 

“American Higher Education in Transition,” by Ronald G. Ehrenberg

American higher education is in transition along many dimensions: tuition levels, faculty composition, expenditure allocation, pedagogy, technology, and more. During the last three decades, at private four-year academic institutions, undergraduate tuition levels increased each year on average by 3.5 percent more than the rate of inflation; the comparable increases for public four-year and public two-year institutions were 5.1 percent and 3.5 percent, respectively. Academic institutions have also changed how they allocate their resources. The percentage of faculty nationwide that is full-time has declined, and the vast majority of part-time faculty members do not have Ph.D.s. The share of institutional expenditures going to faculty salaries and benefits in both public and private institutions has fallen relative to the share going to nonfaculty uses like student services, academic support, and institutional support. There are changing modes of instruction, together with different uses of technology, as institutions reexamine the prevailing “lecture/discussion” format. A number of schools are charging differential tuition across students. This paper discusses these various changes, how they are distributed across higher education sectors, and their implications. I conclude with some speculations about the future of American education.
Full-Text Access 

Articles

“Compensation for State and Local Government Workers,” by Maury Gittleman and Brooks Pierce

Are state and local government workers overcompensated? In this paper, we step back from the highly charged rhetoric and address this question with the two primary data sources for looking at compensation of state and local government workers: the Current Population Survey conducted by the Bureau of the Census for the Bureau of Labor Statistics, and the Employer Costs for Employee Compensation microdata collected as part of the National Compensation Survey of the Bureau of Labor Statistics. In both data sets, the workers being hired in the public sector have higher skill levels than those in the private sector, so the challenge is to compare across sectors in a way that adjusts suitably for this difference. After controlling for skill differences and incorporating employer costs for benefits packages, we find that, on average, public sector workers in state government have compensation costs 3-10 percent greater than those for workers in the private sector, while in local government the gap is 10-19 percent. We caution that this finding is somewhat dependent on the chosen sample and specification, that averages can obscure broader differences in distributions, and that a host of worker and job attributes are not available to us in these data. Nonetheless, the data suggest that public sector workers, especially local government ones, on average, receive greater remuneration than observably similar private sector workers. Overturning this result would require, we think, strong arguments for particular model specifications, or different data.
Full-Text Access 

“Recommendations for Further Reading,” by Timothy Taylor
Full-Text Access

Are the New Auto Fuel Economy Standards For Real?

Politicians are predisposed to like a technology standard, like the Corporate Average Fuel Economy (CAFE) standards for automobile miles-per-gallon, as a way of holding down petroleum use. After all, it sounds a lot better to voters than enacting a gasoline tax or a carbon tax! Pass a law that better-mileage cars will be phased in over the next decade or two, and politicians can boast of their great achievement, sidestepping the fact that promises aren’t achievements and rules are made to be changed.

Thus, when I heard about the plans for a dramatic increase in CAFE standards, I was skeptical. In the most recent issue of my own Journal of Economic Perspectives, Christopher R. Knittel discusses various aspects of “Reducing Petroleum Consumption from Transportation.” As Knittel writes: “A new CAFE standard in place for 2011 seeks to increase average fuel economy to roughly 34.1 miles per gallon by 2016. The Environmental Protection Agency and Department of Transportation are currently in the rule-making process for model years 2017 and beyond, with President Obama and 13 automakers agreeing to a standard of 54.5 MPG by 2025.” Knittel provides evidence to back up my skepticism about the past use of CAFE standards, but he also argues that those future standards, unbelievable as they may at first appear, are technologically achievable.

Back in 1975, against a backdrop of a dramatic rise in oil prices and concern over dependence on imported oil, the U.S. enacted the Corporate Average Fuel Economy (CAFE) law, requiring that the average fuel economy of the cars sold by each company start at 18 miles per gallon and rise to 27 mpg by 1985. Higher gasoline prices provided a strong inducement for people to buy these more fuel-efficient cars, but when gasoline prices dropped in the mid-1980s, the CAFE standards stagnated. Here’s a figure from Knittel showing how CAFE standards flattened out after 1985–and also showing the planned increase to take place.



The lack of any increase in the CAFE standards was only part of the story. Knittel explains: “[T]wo features of the original CAFE standards reduced their effect. First, sport-utility vehicles were treated as light trucks, and thus could meet a lower miles-per-gallon standard than cars. Perhaps not coincidentally, in 1979 light trucks comprised less than 10 percent of the new vehicle fleet, but this share rose steadily and peaked in 2004 at 60 percent. Second, vehicles with a gross vehicle weight of over 8,500 pounds, which includes many large pickup trucks and sports-utility vehicles, were exempt from CAFE standards.”

Taking these factors together, actual fuel economy for the U.S. fleet of cars hasn’t been rising much, although it has edged up in the last few years with higher gasoline prices. My own interpretation is that the CAFE standards effectively became nonbinding–that is, they weren’t pushing anyone to buy a different car than they otherwise would have purchased, and they weren’t adding to fuel economy. Here’s the data:

So, is it technologically possible to meet the future increase in miles-per-gallon standards? Knittel argues “yes.” He points out: “By world standards, these [currently existing] miles-per-gallon standards are not aggressive. After accounting for differences in the testing procedures, the World Bank estimated that the European Union standard was roughly 17 MPG more stringent in 2010 than the U.S. standard …”

Moreover, Knittel has carried out a series of studies looking at technological progress in cars, and the tradeoffs between weight, engine power, and fuel efficiency. He finds: “On average, a vehicle with a given weight and engine power level has a fuel economy that is 1.75 percent higher than a vehicle with the same weight and horsepower level from the previous year. … In the medium run, automakers can adjust vehicle attributes by trading off weight and horsepower for increased fuel economy. In Knittel (2011), I find that reducing weight by 1 percent increases fuel economy by roughly 0.4 percent, while reducing horsepower and torque by 1 percent increases fuel economy by roughly 0.3 percent.”

By Knittel’s calculation, getting from the new-car average fuel economy standard of 29 mpg in 2010 to 34.1 mpg in 2016 is doable. If technological progress continues to improve mileage by 1.75% per year, and ways are found to reduce weight and engine power by about 6%, the standard for 2016 is achievable.

But what about that planned standard of 54.5 mpg by 2025? Knittel explains that the number is somewhat inflated: “Taken literally, it would require fundamental changes to rates of technological progress and/or the size and power of vehicles. The 2025 number is a bit misleading. In the law, the 54.5 miles-per-gallon standard is based on a calculation from the Environmental Protection Agency based on carbon dioxide tailpipe emissions. It also includes credits for many technologies including plug-in hybrids, electric and hydrogen vehicles, improved air conditioning efficiency, and others. On an apples-to-apples basis, Roland (2011) cites some industry followers that claim that the actual new fleet fuel economy standard in 2025 is more like 40 miles per gallon. Achieving 40 miles per gallon by 2025 is certainly possible. At a rate of technological progress of 1.75 percent per year, 40 miles per gallon requires additional reductions in weight and engine power of less than 7 percent.”
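This back-of-envelope arithmetic can be sketched in a few lines. The code below is my own illustration, not from Knittel's paper: it assumes the 1.75 percent annual gain compounds, and that the weight and power elasticities (0.4 and 0.3) simply add when both attributes are cut by the same percentage. The results are sensitive to how many model-year steps one counts and how the elasticities are combined, which is why these figures come out somewhat higher than Knittel's "about 6 percent" and "less than 7 percent."

```python
# Sketch of the CAFE fuel-economy arithmetic (illustrative assumptions,
# not Knittel's exact method).

def projected_mpg(base_mpg, years, tech_rate=0.0175):
    """Fuel economy after compounding technological progress alone."""
    return base_mpg * (1 + tech_rate) ** years

def required_downsizing(base_mpg, target_mpg, years,
                        weight_elast=0.4, power_elast=0.3):
    """Equal percentage cut in weight and in power needed to close the gap
    left after technological progress (linear, additive approximation)."""
    gap = target_mpg / projected_mpg(base_mpg, years) - 1
    return gap / (weight_elast + power_elast)

# From the 2010 new-car average of 29 mpg toward the 2016 standard of 34.1 mpg:
print(round(projected_mpg(29, 6), 1))                     # → 32.2 mpg from tech progress alone
print(round(100 * required_downsizing(29, 34.1, 6), 1))   # → 8.5 (% cut in weight and power)

# Toward the ~40 mpg apples-to-apples figure for 2025:
print(round(100 * required_downsizing(29, 40, 15), 1))    # → 9.0
```

Either way, the qualitative point survives: a year or two of trend technological progress, plus single-digit percentage reductions in vehicle weight and power, covers the distance to the stated targets.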

But although the planned mileage standards do appear–to my surprise–technologically feasible, it remains to be seen whether they are politically feasible, and also whether they are even a sensible public policy idea.

On the political side, the U.S. political system found a way for most of the last three decades to have fuel economy standards on the books as a matter of law and public relations–but to have standards with very little bite. Let’s see whether the fuel economy standards planned for the future actually cause some real changes in the U.S. auto fleet, or whether they are quickly riddled with exceptions.

But at a deeper level, it’s not even clear that fuel economy standards are a good policy idea. Knittel explains: “At a basic level, it focuses on the wrong thing—fuel economy instead of total fuel consumption. CAFE only targets new vehicles and leads to subsidies for some vehicles. Finally, CAFE pushes consumers into more-fuel-efficient vehicles without changing the price of fuel, leading to more miles traveled. The empirical size of this last effect, known as “rebound,” is a matter of ongoing research, but to the extent that rebound occurs, it necessarily leads to greater congestion, accidents, and criteria pollutant emissions relative to the status quo.” A considerable body of economic research suggests that if your policy goal is to reduce petroleum consumption, a gasoline tax or a carbon tax accomplishes the goal at a far lower social cost than fuel economy standards–although for politicians the explicitness of that cost seems to make it a nonstarter.

For more discussion of this topic, I recommend “Automobile Fuel Economy Standards: Impacts, Efficiency, and Alternatives,” by Soren T. Anderson, Ian W. H. Parry, James M. Sallee, and Carolyn Fischer, in the Winter 2011 issue of the Review of Environmental Economics and Policy. The publisher has made the article freely available here.
 

The Big Decline in Housing Segregation

Edward Glaeser and Jacob Vigdor document and discuss “The End of the Segregated Century: Racial Separation in America’s Neighborhoods, 1890-2010,” in Civic Report #66 written for the Manhattan Institute for Policy Research.
Here’s a headline graph. Take the two most common measures of residential segregation, the “dissimilarity index” and the “isolation index” (both explained further in a moment), and apply them to the 10 largest American cities using Census data. The pattern that emerges is a large increase in residential segregation from about 1910 to 1950, segregation remaining at that high level from about 1950 to 1970, and then a sharp decline in residential segregation from 1970 up through 2010.

Glaeser and Vigdor summarize the pattern in this way: “Segregation has declined steadily from its mid-century peak, with significant drops in every decade since 1970. As of 2010, the separation of African-Americans from individuals of other races stood at its lowest level in nearly a century. Fifty years ago, nearly half the black population lived in what might be termed a “ghetto” neighborhood, with an African-American share above 80 percent. Today, that proportion has fallen to 20 percent.”
How is residential segregation measured? 
“Residential segregation can be measured in a variety of ways. The most common method is to form an index that summarizes the level of segregation in a metropolitan area on a scale from zero, where every neighborhood is just as diverse as the entire region, to 100, where individuals of different races never share neighborhoods. Indices differ according to their coding of intermediate situations … Some indices require more detailed geographical data than others, with the most sophisticated using census information collected on a block-by-block basis.
“This report focuses on two measures—the dissimilarity index and the isolation index—both of which have a long history in social-scientific writing on segregation. The two measures together adequately summarize segregation, being highly correlated with more sophisticated indices, while being simple enough to calculate that even data from the late nineteenth century are sufficiently rich to permit their computation.”
What is the intuitive interpretation of a dissimilarity index?
“The dissimilarity index measures the extent to which two groups are found in equal proportion in all neighborhoods. It can be interpreted as the proportion of individuals of either group that would have to change neighborhoods in order to achieve perfect integration. It is the most commonly used segregation measure, first introduced into the sociology literature shortly after World War II.”
What is the intuitive interpretation of an isolation index?
“Dissimilarity is not a perfect measure. Consider the following scenario. There are two equal-size neighborhoods in a city: one is 100 percent white; and the other is 98 percent white and 2 percent black. According to the dissimilarity index, this city is fairly segregated, since about half of the black residents would need to move in order to achieve perfect integration. In an important sense, though, the black residents are not isolated—after all, they live in a neighborhood that is 98 percent white.

The isolation index is designed to distinguish this sort of scenario from one where neighborhoods have dramatically different racial character. It measures the tendency for members of one group to live in neighborhoods where their share of the population is above the citywide average. In this hypothetical example, black residents live in a neighborhood that is 2 percent black, which is just 1 percentage point higher than what would be expected under perfect integration. The isolation index would therefore be on the order of 1 percent, rather than 50 percent.”
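To make the two indices concrete, here is a minimal sketch of one standard formulation of each, applied to the hypothetical two-neighborhood city above. This is my own illustration; Glaeser and Vigdor discuss variants of these formulas in the report.

```python
# Segregation indices on the report's hypothetical city (illustrative sketch).
# Each neighborhood is a (black, white) population pair.

def dissimilarity(neighborhoods):
    """Share of either group that would have to move for perfect integration."""
    B = sum(b for b, w in neighborhoods)  # total black population
    W = sum(w for b, w in neighborhoods)  # total white population
    return 0.5 * sum(abs(b / B - w / W) for b, w in neighborhoods)

def isolation(neighborhoods):
    """Black residents' exposure to other black residents, adjusted for the
    citywide black share (0 = perfect integration, 1 = complete isolation)."""
    B = sum(b for b, w in neighborhoods)
    T = sum(b + w for b, w in neighborhoods)
    exposure = sum((b / B) * (b / (b + w)) for b, w in neighborhoods)
    p = B / T  # citywide black share
    return (exposure - p) / (1 - p)

# Two equal-size neighborhoods: one all-white, one 98% white / 2% black.
city = [(0, 100), (2, 98)]
print(round(dissimilarity(city), 3))  # → 0.505 (about half would need to move)
print(round(isolation(city), 3))      # → 0.01  (on the order of 1 percent)
```

The output matches the intuition in the quoted passage: the same city looks quite segregated by dissimilarity but barely isolated at all.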

This paper is more about documenting the patterns than about inquiring into underlying causes, but Glaeser and Vigdor do offer an overview of causes.

In their telling, the dramatic rise in residential segregation from 1910 to about 1950 arose because it was a time of huge migration from rural areas to cities in the United States, and when African-Americans participated in this migration, they were met with a combination of white hostility and legal restrictions that pushed them into highly segregated neighborhoods. The decline in residential segregation over the last 40 years is a combination of factors: reductions in the legal and social barriers that enforced residential segregation; outflow of the African-American population from highly segregated neighborhoods; and in some parts of the country, inflow of Hispanic and Asian immigrants to neighborhoods that had been segregated before.

High levels of residential segregation have been an enormous and legitimate social concern, so the decline in those levels is worth noting and welcoming. But as Glaeser and Vigdor point out, the good news comes with a mournful undertone for those who remember some of the high hopes for housing desegregation back in the 1960s:

“The 1960s were the heyday of racial segregation. During those years, segregation seemed a likely cause of many of the troubles afflicting African-Americans. Segregation was so enormous, and so unfair, that it seemed to create a separate and unequal experience for African-Americans everywhere. During those years, the fight against housing segregation seemed to offer the possibility that once the races mixed more readily, all would be well.

Forty years later, we know that this dream was a myth. There is every reason to relish the fact that there is more freedom in housing today than 50 years ago and to applaud those who fought to create that change. Yet we now know that eliminating segregation was not a magic bullet. Residential segregation has declined pervasively, as ghettos depopulate and the nation’s population center shifts toward the less segregated Sun Belt. At the same time, there has been only limited progress in closing achievement and employment gaps between blacks and whites. … While the decline in segregation remains good news, far too many Americans still lack the opportunity to achieve meaningful success.”

Don’t Know Nothing About Military Strategy

I don’t know anything about issues like the appropriate number of soldiers in the armed forces, or which weapons systems are really needed, or where forces should be based around the world. But I do know something about federal budget numbers. As the political debate unfolds over appropriate levels of defense spending in the years ahead, here are some historical and international perspectives.

Measured as a share of GDP, U.S. defense spending is down substantially from the figures of 10% or more that often occurred in the 1950s and 1960s, and down from the 6% reached during the Reagan defense build-up of the 1980s. Although defense spending as a share of GDP has nudged up since September 11, 2001, it was 4.7% of GDP in 2011.

Defense spending also doesn’t dominate the federal budget as it once did. Back in January 1961, in President Dwight Eisenhower’s farewell address, he warned of the dangers of the “military-industrial complex.” But at that time, defense spending was still almost 10% of GDP and more than half of total federal spending. Indeed, defense spending was as much as 70% of all federal spending back in the early 1950s (and higher than that at the peak of WWII). But for the last two decades, defense spending has dropped to about 20% of all federal spending.

While U.S. defense spending is at relatively low levels, historically speaking, both relative to GDP and relative to total federal spending, it remains high relative to spending by other countries. Here’s a table showing that U.S. defense spending is more than 40% of the world total, and that U.S. defense spending comfortably exceeds the sum of defense spending by the next 10 largest spenders.

The U.S. economy is the largest in the world, and it also spends one of the greatest shares of that economy on defense. By my count, only eight countries in the world (for which SIPRI has data) spend a larger share of their GDP on defense than does the U.S.:  Saudi Arabia, 11.2%; Chad, 6.2%; Georgia, 5.6%; Iraq, 5.4%; Israel, 6.3%; Jordan, 6.1%; Oman, 9.7%; and UAE, 7.3%.

Of course, spending isn’t the only variable that matters in national defense: strategy, diplomacy, ideology, economic ties, even personal cross-border ties can affect the likelihood and extent of instability. Defense spending can be quite good at projecting certain kinds of power, but not especially useful at blocking a biological or nuclear weapon that fits in a panel truck or even a large suitcase. That said, these sorts of numbers cut in both directions in the debate over levels of defense spending. Those who favor reductions in defense spending over time might take note of the fact that we haven’t been living in Eisenhower’s world for some time, and U.S. defense spending is a smaller share of the economy and of federal spending than the historical norm. Those who favor higher defense spending might take note of the fact that the U.S. is far and away the largest defense spending nation now–and that many of the other largest spenders are our allies.

For both sides, I’m always interested not just in hearing arguments about “more” or “less,” but about what is enough. If you prefer cutting defense, how low would you go before you would say “enough”? If you prefer increasing defense, how high would you go before you would say “enough”? If someone can’t explain their answer to that question, I suspect that underneath their show of confident certainty, they don’t really know any more about weighing the costs and benefits of military spending than I do.

Thanks to Danlu Hu for putting together the U.S. defense spending figures over time from the Historical Tables of the President\’s Budget for 2013.

Parental Leave in Other Countries

Note:  On February 22, 2012, two comments from readers were added at the end of the post.
 ____________

The United States has far fewer laws that offer protections to parents with family responsibilities than do most other countries. Alison Earle, Zitha Mokomane, and Jody Heymann discuss \"International Perspectives on Work-Family Policies: Lessons from the World’s Most Competitive Economies,\" in the Fall 2011 issue of the Future of Children. Indeed, the entire issue is devoted to \"Work and Family.\" From the \"Summary\" of the Earle, Mokomane, and Heymann article:

\”The United States does not guarantee families a wide range of supportive workplace policies such as paid maternity and paternity leave or paid leave to care for sick children. … Using indicators of competitiveness gathered by the World Economic Forum, the authors identify fifteen countries, including the United States, that have been among the top twenty countries in competitiveness rankings for at least eight of ten years. … They find that every one of these countries, except the United States, guarantees some form of paid leave for new mothers as well as annual leave. And all but Switzerland and the United States guarantee paid leave for new fathers. … The majority of these countries provide paid leave for new mothers, paid leave for new fathers, paid leave to care for children’s health care needs, breast-feeding breaks, paid vacation leave, and a weekly day of rest. Of these, the United States guarantees only breast-feeding breaks (part of the recently passed health care legislation).\”

Here are some illustrative tables showing policies in these countries. The U.S. does not mandate paid leave for new parents. All 14 of the comparison countries in the table offer paid leave for mothers, lasting at least 18 weeks and in some cases more than a year, and replacing anywhere from 25% of salary up to 100% of salary. Thirteen of the 14 comparison countries also offer paid leave for fathers, ranging at the low end from a few days or a couple of weeks up to more than a year, and again replacing from 25% of pay up to 100%.

The next figure looks at  leave policies for attending to children\’s health care. U.S. law does have provisions for breast-feeding breaks, as do 7 of the 14 comparison countries. All of the countries, including the U.S., have provisions for leave to care for children\’s health needs–but unlike in the U.S., that leave is paid in 11 of the 14 comparison countries.

The final table looks at policies on paid annual leave, a weekly day of rest, or night work. These are not policies specifically aimed at parents, but at all workers. The United States has no law guaranteeing paid annual leave, while all 14 of the comparison countries do–often requiring four or five weeks of such leave. Thirteen of the 14 comparison countries also have legal guarantees of at least one day of rest each week.

Earle, Mokomane, and Heymann discuss evidence that parental leave policies of various sorts are associated with improved infant health and lower infant mortality–in part by enabling more breastfeeding, in part by enabling parents to be more involved in preventive and other health care for their children. They argue that parental leaves help to foster \"children’s social, psychological, behavioral, emotional, and cognitive functioning.\"

I\’m enough of an American at heart that some of these international comparisons open my eyes pretty widely. In Germany, you can have more than two years of paid leave for both mothers and fathers? In Austria, Denmark, Sweden, and the United Kingdom, there is a national legal guarantee of five weeks of paid vacation? I\’m enough of an economist to wonder about the incentives that such provisions give employers to avoid hiring women of child-bearing age, or to slot young people into into certain jobs where they can be replaced without too much fuss if they disappear for a couple of years. With workers who make middle-income salaries or higher, guarantees of paid vacation can be considered part of their overall compensation, but for low-paid, low-skill workers, such rules may discourage hiring them at all. I\’d want to know more about how such policies are designed and enforced before enacting them in the United States.

But it is just  a fact that the U.S. labor market is at the far end of the international spectrum of high-income countries in not offering these kinds of policies. Even with my born-in-America, economist-trained skepticism, I find myself thinking that experimenting with such policies at the firm level, the state level, and even the national level is worth a closer look. After all, the days when American society could count on nearly all mothers to exit the (paid) workforce and spend their time in childcare and homemaking are long behind us.

___________________

David Paul writes:

\”I happen to be a student from Germany and just had this topic in school, so I wanted to clarify the laws a bit. You wrote: \”I\’m enough of an American at heart that some of these international comparisons open my eyes pretty widely. In Germany, you can have more than two years of paid leave for both mothers and fathers?\”

\”It\’s actually not quite that extreme. 14 weeks vacation for the mother at 100% pay are standard (6 before birth and 8 after – the 8 after are mandatory – paid by the statutory health insurance). After that, either the mother or the father can stay at home for a year at ~66% previous pay (this is paid by the government). They can switch in that time, for instance mother half the time and father half the time – in that case it goes up to 14 months. Or alternatively one parent can stay at home for 2 years, but then at only ~33% of previous wages. This doesn\’t change the basic point on the difference between the US laws and those of the other countries of course.\”  David adds: \”The laws can be found in the MuSchG and the BEEG.\”

Michael Cain writes:

\”As I am inclined to say, the US, particularly over the last 30 years, has made policy on the basis of making the country be a good place to be a capitalist, while most of the other developed economies have made policy on the basis of making their countries be a good place to be a worker.  Personally, I tend to believe that the latter works out better for social outcomes, although it\’s possible to go overboard, of course.  Business owners, once forced into the corner where they must have and keep employees, and must pay a living wage, and must contribute as necessary to provide benefits such as universal health care access, are *very* good at finding ways to increase the efficiency of the employees so that the owners still make money: they demand better education, they invest in better equipment, they listen to their workers (eg, Japanese auto companies make their engineers design for easy construction as well as other attributes).  The US has let business owners take the easy way out: drop health insurance, relocate the factory to a different region or foreign country, and so forth.\”

Six Adults and One Child: The Coming Baby Bust

Wendell Cox and Emma Chen tell the story of \"Six Adults and One Child\" in their chapter in \"The New World Order,\" edited and largely written by Joel Kotkin for the Legatum Institute. Much of the report is about thinking of the world as dominated by three spheres: the Indian sphere of influence, the Sinosphere, and the Anglosphere. But Cox and Chen are focused on the demographics of the coming baby bust. Here\’s their story from China (footnotes omitted):

\”On a Saturday afternoon at The Bund, Xiao Ming (or “Little Ming”) clings tightly onto the hands of his paternal grandparents. His maternal grandparents walk slightly ahead, clearing a path for him in the midst of all the buzz and traffic. Retracing the imprints of their imaginary footsteps, Xiao Ming takes his first tentative steps as a three year old in town for the first time. Slightly behind him, the watchful eyes and
ready hands of his own parents spur him on. 


Xiao Ming’s personal parade epitomises the popular quip in Shanghai and across China, that “it takes six adults to raise one child”. These six individuals form the unspoken support structure of China’s youth: While the OECD points out that 80% of students in Shanghai attend after-school tutoring, it fails to capture the “soft factors” behind Shanghai’s top rankings in the Program for International Student Assessment (PISA). Popular Chinese dramas such as 房奴 (House Slave) depict this in meticulous detail: Grandparents spend hours brewing “brain tonics” for their grandchildren, and parents pack austere work lunchboxes to save up for their child’s tuition fees. …

Here’s the big issue down the historical road: Thirty years from now, how will Xiao Ming handle six elderly parents and grandparents, all by himself? Xiao Ming’s impending dilemma is not unique to China. Overall, what author Phil Longman calls a “gray tsunami” will be sweeping the planet, with more than half of all population growth coming from people over 60 while only six percent will be from people under 30. The battle of the future – including in the developing world – will be, in large part, how to maintain large enough workforces required for the economic growth needed to, among other things, take care of and feed the elderly. …

Already the global fertility rate, including the developing countries, has dropped in half to an estimated 2.5 today. Close to half the world’s population lives, notes demographer Nicholas Eberstadt, in countries with below replacement rate birth-rates. The world, he suggests, is experiencing a “fertility implosion”.\”

Here\’s a figure to illustrate. The \"dependency ratios\" here refer to the number of either elderly people over age 65 or children age 14 and under for every 100 members of the working-age population between ages 15-64. Thus:

  • The dark blue line shows that in regions of the world with more developed economies, there were more than 40 children for every 100 working-age people back in 1950, but that has now fallen to about 30. 
  • The light blue line shows that in those same regions of the world with  more developed economies, there were about 10 over-65 elderly for every 100 workers back in 1950, but that ratio has now reached 30 and is headed for 40 by mid-century. 
  • The orange line shows that less developed regions (leaving aside the poorest countries) had about 75 children for every 100 workers back in 1965, but that ratio has now fallen to about 40 and is headed for 30 by mid-century. 
  • The scarlet line shows that the less developed regions (again leaving aside the poorest countries) have only about 10 elderly for every 100 workers today–not much of an increase in the last half-century–but that this ratio is about to rise sharply to about 30 by mid-century and 40 by the end of the century.
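
For concreteness, the arithmetic behind these dependency ratios can be sketched as follows; the population counts below are hypothetical, chosen only to illustrate the calculation:

```python
# Dependency ratios as defined above: children (ages 0-14) or elderly
# (ages 65+) per 100 members of the working-age population (ages 15-64).
# The population counts in the example are hypothetical.
def dependency_ratios(children, working_age, elderly):
    """Return (youth ratio, old-age ratio) per 100 working-age people."""
    youth = 100 * children / working_age
    old_age = 100 * elderly / working_age
    return youth, old_age

# A region with 30 million children, 100 million working-age people,
# and 10 million elderly:
youth, old_age = dependency_ratios(30e6, 100e6, 10e6)
print(youth, old_age)  # 30.0 10.0
```

These are the same ratios plotted in the figure, just computed for a single hypothetical population.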

The question of how the workers of tomorrow will support the elderly of tomorrow is a global issue. Ultimately, the only answer I can see to the policy question is an expectation that most people will retire later than their current expectations.  But I also suspect that living in a society where the elderly outnumber the children will reshape all sorts of institutions of everyday life. Schools and playgrounds will become more scarce; libraries and senior centers will proliferate. Holidays like Halloween are already moving from being child-centered to being adult-centered. 

Many people will find that they are part of a tall, slender family \"tree.\" Instead of experiencing three generations of relatives–children, parents, grandparents–more and more people will be living in families where four or even five generations are alive at the same time. However, with smaller family sizes, these generations will not include large numbers of people. Thus, you can imagine a typical family \"tree\" of the future as consisting of two grandparents approaching age 60, who had one child, who in turn married and had one child, but who are also feeling responsible for two of their own parents who are about 80 years of age, and also for one of their grandparents who has just turned 100. Families with fewer children will spend less of their adult lives raising children, and there will be ever-greater numbers of adults who never become parents. Such families face open questions: will we rely more on close family members, because each generational tie feels more precious? Or, of necessity, will we all need to rely more on non-relatives? We do not have mental templates for how to organize our family ties and responsibilities in these tall, slender family trees.

The Remarkable Consistency of Long-Run U.S. Economic Growth

President Obama\’s proposed budget for FY 2013 came out yesterday, and I did what I usually do: Skip the details of the budget proposals, and in particular skip the projections for years off in the future, which are under every president a mix of political calculations and feigned optimism about what will be enacted into legislation. Instead, head for the volumes labelled \"Analytical Perspectives\" and \"Historical Tables.\"

For example, the \”Analytical Perspectives\” volume of the proposed FY 2013 budget has some discussion of whether the U.S. economy will eventually bounce back all the way from the Great Recession to its earlier trendline of growth, or whether the Great Recession will cause a drop of the economy to a lower growth path. A figure illustrates that from 1890 to the present, the U.S. economy has followed a very consistent growth path.

The vertical axis of the graph shows per capita GDP measured by its natural logarithm. For those eyeballing the graph, the natural log of $40,000 is 10.6–roughly the present level of per capita GDP. The natural log of $5,000 is 8.5–roughly the level of real per capita GDP back in 1890. A straight line on a log graph means that the variable is growing at a constant percentage rate: in this case, at about 1.8% per year.
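
As a quick back-of-the-envelope check on those figures (using only the rough endpoint values quoted above, so the result is approximate):

```python
import math

# Rough endpoints quoted above: real per capita GDP of about $5,000 in 1890
# and about $40,000 in 2012.
gdp_1890, gdp_2012 = 5_000, 40_000
years = 2012 - 1890

# The log levels match the eyeball figures in the text:
print(round(math.log(gdp_2012), 1))  # 10.6
print(round(math.log(gdp_1890), 1))  # 8.5

# A straight line on a log scale means a constant growth rate:
rate = (math.log(gdp_2012) - math.log(gdp_1890)) / years
print(round(100 * rate, 1))  # 1.7 (percent per year)
```

The implied trend of roughly 1.7% per year is consistent with the roughly 1.8% trend in the budget chart, given the rounded endpoint values.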

Here\’s how the budget discusses the question of whether the economy will eventually return to trend:

\”Recent recoveries have been somewhat weaker than average, but the last two expansions were preceded by mild recessions with relatively little pent-up demand when conditions improved. Because of the depth of the recent recession, there is much more room for a rebound in spending and production than was true either in 1991 or 2001. On the other hand, lingering effects from the credit crisis and other special factors have limited the pace of the recovery until now. Thus, the Administration is forecasting a slower than normal recovery, but one that eventually restores GDP to near the level of potential that would have prevailed in the absence of a downturn. Some international economic organizations have argued that a financial recession permanently scars an economy, and this view is also shared by some American forecasters. On that view, there is no reason to expect a full recovery to the previous trend of real GDP. The statistical evidence for permanent scarring comes mostly from the experiences of developing countries and its relevance to the current situation in the United States is debatable. Historically, economic growth in the United States economy has shown considerable stability over time as displayed in Chart 2-7. Since the late 19th century, following every recession, the economy has returned to the long-term trend in per capita real GDP. This was true even following the only previous recession in which the United States experienced a disastrous financial crisis – 1929-1933 …\”

Of course, past performance is no guarantee of future results, as investment advisers are quick to remind you. Still, those who believe that the Great Recession will move the U.S. economy to a permanently lower growth path are making a prediction that flies in the face of the last 120 years of U.S. economic experience.

Labor\’s Declining Share of Total Income

Margaret Jacobson and Filippo Occhino of the Cleveland Fed offer a short overview of what is \”Behind the Decline in Labor\’s Share of Income\” in the February 2012 issue of Economic Trends.

Back in the day, when I was first getting familiar with these numbers, the standard summary of the data was that labor income was about two-thirds of total output in the U.S. economy, although the share fluctuated over the business cycle. As Jacobson and Occhino write: \”Over the cycle, the labor income share tends to increase during the early part of recessions, because businesses lower labor compensation less than output, and compensation per hour continues to increase even as productivity slows down. Then, after reaching a peak sometime during the recession, the labor income share tends to decrease during the rest of the recession and the early part of the recovery, as output picks up at a faster pace than labor compensation, and compensation per hour grows at a slower pace than productivity. Only later in the recovery, as the labor market tightens, does labor compensation catch up with output and productivity, and the labor income share recovers.\”

But this basic fact–labor as two-thirds of economic output–no longer seems to be holding true. It\'s not just that the ratio is at historic lows in the post-World War II period, as shown in the figure; after all, given the depth and length of the Great Recession, and the sluggishness and sustained high unemployment of the tepid recovery, it\'s no surprise that the labor share of income would be low about now. But the data seems to show an overall pattern of a dropping labor share of income even before the Great Recession started, reaching back to the 1980s or even the late 1970s.

When output is rising faster than labor income, it necessarily follows that labor compensation is rising more slowly than output per hour. Here\'s the figure. Note that up until about the early 1980s, productivity as measured by output per hour rose more-or-less in step with compensation per hour (although it appears that even then, output/hour was trying to creep ahead). But the gap has expanded since then, and was expanding even before the Great Recession.

What explains this change? Jacobson and Occhino list the possibilities: \”Economists have identified three long-term factors that can explain why the wage-productivity gap has widened and the share of income accruing to labor has declined. The first is the decrease in the bargaining power of labor, due to changing labor market policies and a decline of the more unionized sectors. Another factor is increased globalization and trade openness, with the resulting migration of relatively more labor-intensive sectors from advanced economies to emerging economies. As a consequence, the sectors remaining in the advanced economies are relatively less labor-intensive, and the average share of labor income is lower. The third factor is technological change connected with improvements in information and communication technologies, which has raised the marginal productivity and return to capital relative to labor.\”

This list seems basically right to me, although my reading of the evidence is that the items are listed in inverse order of importance. But I also find it useful to think about the fall in labor income in reverse, as the rise of capital income. For example, the Dow Jones index roughly tripled from 1980 to 1990 (rising from 900 to 2700), and then more than tripled from 1990 to 2000 (rising from 2700 to 10,800). While the Dow was basically flat over the decade from 2000 to 2010, large non-labor income was generated during the earlier part of the decade by housing prices. While many of us participate in gains in the stock market or the housing market in some ways, the bulk of those gains do tend to flow to those with higher income levels.
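
For a sense of scale, here is the annualized growth implied by those Dow figures; this is a rough sketch that ignores dividends and inflation:

```python
# Annualized growth rate implied by a rise from `start` to `end` over
# `years` years; ignores dividends and inflation.
def annualized(start, end, years):
    return (end / start) ** (1 / years) - 1

# Tripling from 900 to 2700 over 1980-1990:
print(round(100 * annualized(900, 2700, 10), 1))   # 11.6 (percent per year)
# Quadrupling from 2700 to 10,800 over 1990-2000:
print(round(100 * annualized(2700, 10800, 10), 1)) # 14.9 (percent per year)
```

Even the flat 2000-2010 decade looks dramatic next to double-digit annual gains in the two decades before it.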

But looking ahead, another rapid tripling of the stock market, as in the 1980s and again in the 1990s, seems unlikely, as does another housing price bubble. Meanwhile, the U.S. labor market is showing some feeble signs of resurgence, and may well strengthen over the next couple of years. Over the next 3-5 years, I\'d expect the labor income share of output to rise–although I don\'t expect labor income to re-attain the old level of two-thirds of national output in the near- or the middle-term.

Medicaid in Transition

\”Medicare is for the elderly and Medicaid is for the poor.\” I\’ve heard it a million times, and probably said it myself, but of course the distinction isn\’t quite correct. Medicaid was from the start focused on the \”deserving\” poor, which at the time was low-income families with children, along with the poor who were also disabled or elderly, but it didn\’t cover single adults below the poverty line.

The vast majority of Medicaid\'s spending is not on low-income families of able-bodied adults with children, but instead on those with low incomes who are also blind, disabled, or elderly. The 2010 Actuarial Report on the Financial Outlook for Medicaid reports that in 2009, Medicaid covered 50 million people on average at any given point in time, and had outlays of $380 billion. The elderly represented about one-tenth of all Medicaid enrollees, but about one-fifth of all Medicaid expenditures. As the Medicaid actuaries report:

\”Per enrollee spending for health services was $6,890 in 2009. Per capita spending for non-disabled children ($2,848) and adults ($4,123) was much lower than that for aged ($15,678) and disabled beneficiaries ($16,563) … While blind or disabled enrollees and aged enrollees are the smallest enrollment groups in Medicaid, they are projected to account for the majority of spending. … [F]or FY 2009, benefit spending was estimated to be $148.4 billion for blind or disabled enrollees and $74.6 billion for aged enrollees. Combined, spending on these two groups constituted 66 percent of Medicaid expenditures … Medicaid spending on non-disabled children represented about 20 percent of total Medicaid benefit expenditures, and spending on non-disabled non-aged adults accounted for about 14 percent.\”

Medicaid is a huge part of the U.S. health care system. The actuaries report that in 2008, \”Medicaid spending for that year represented 14.7 percent of total NHE [national health expenditures]. Private health insurance was the largest source of spending on health care in 2008, accounting for 33.5 percent of total NHE, while Medicare paid for 20.1 percent.\” If you\’re adding up at home, yes, Medicaid plus Medicare pays a higher share of the nation\’s health care bills than does private health insurance. At any given time, Medicaid also covers more people than Medicare: in 2009, Medicaid was covering an average of 50 million people at any given time compared to 46 million for Medicare.

However, Medicaid is about to undergo two major changes that will drive up its costs. One set of changes is related to the Affordable Care Act of 2010, and the other to the aging of the U.S. population.

One of the main ways in which the Affordable Care Act plans to expand health care insurance to those who do not presently have it is through an expansion of who is eligible for Medicaid. The actuaries write: \"The Affordable Care Act will have a substantial effect on Medicaid trends over the next 10 years and beyond. In terms of the magnitude of changes to the program’s projected expenditures and enrollment, it is likely that the Affordable Care Act will be the largest legislative change to Medicaid since the program’s inception.\" The Affordable Care Act is projected to add about 20 million enrollees to Medicaid by 2019, with roughly three-quarters of them being adults. Medicaid is now funded about two-thirds by the federal government and one-third by state governments, but for the first few years (with no promises for after that time!), the federal government will pick up essentially 100% of the cost of these additional enrollees.

The issues of \”Medicaid and the elderly\”  are explored by  Mariacristina  De Nardi, Eric French, John Bailey Jones, and Angshuman Gooptu in the most recent issue of Economic Perspectives, published by the Federal Reserve Bank of Chicago. In particular, Medicaid has become a major provider of long-term care. The authors write:  \”The principal public provider of long-term care is Medicaid, a means-tested program for the impoverished. Medicaid now assists 70 percent of nursing home residents and helps the elderly poor pay for other medical services as well. … Although Medicaid is available only to “poor” households, middle-income households with high medical expenses usually qualify for assistance also. Given the ongoing growth in medical expenditures, Medicaid coverage in old age is thus becoming as much of a
program for the middle class as for the poor …\”

Here\’s a figure from the actuaries showing Medicaid\’s share of total U.S. spending in certain markets. Again, notice that Medicaid spending is a lot less about doctor visits for low-income children and a lot more about nursing home care and home health care:

As the actuaries put it: \”Medicaid has a major responsibility for providing long-term care because the program covers some aged and many disabled persons, who tend to be the most frequent and most costly users of such care, and because private health insurance and Medicare often furnish only limited coverage for these benefits, particularly for nursing homes. Many people who pay for nursing home care privately become impoverished due to the expense; as a result, these people eventually become eligible for Medicaid.\”

At least so far, payments for long-term care are not yet a major driver of Medicaid expenses. The front edge of the baby boomer generation is just hitting its retirement years, but the really substantial demand for nursing home care kicks in more at age 85 than at age 65. The Medicaid actuaries write: \”As the oldest members of the baby boom generation begin to reach age 65, both the number of aged enrollees in Medicaid and eventually the rate of long-term care spending growth are projected to increase. While the baby boom generation is not estimated to have a major effect on long-term care spending during FY 2010 through FY 2019, the increase in the number of people over age 85 in the next 10 years is expected to do so.\”

For a more detailed discussion of Long-Term Care Insurance in the U.S. and its interaction with Medicaid, see also my post of November 22, 2011.  For a broader discussion of Long-Term Care in International Perspective, see my post of August 9, 2011. 

It\’s worth remembering that unlike Medicare, Medicaid has no dedicated tax or revenue stream. Unlike private health insurance, Medicaid isn\’t financed by premiums. Instead, the ongoing rise in Medicaid costs will be in direct competition with other government programs funded from general tax revenues.

U.S. and World Perspective: Immigration Policy #5

This is the fifth of five posts on immigration policy. For the first post and an overview, see here.

American public discourse on immigration is so hot-blooded that it often sounds as if the fate of the republic–or at least the economy–is at stake. But oddly enough, the economic issues related to U.S. immigration are in all likelihood dwarfed in size by the global gains.

While I believe that an expansion of immigration offers gains to those already in this country, the gains seem likely to be small. For example, in the Cato Journal symposium, Raúl Hinojosa-Ojeda uses a computable general equilibrium approach to estimate that a substantial rise in immigration could increase U.S. GDP by 0.8%–and even this modest gain is well above other estimates I\’ve seen. Similarly, while I argued in the third post in this series that the effects of immigration on government budgets were probably positive, the arguments over gains and losses to different levels of government, and the gap between the two, are measured in billions or at most tens of billions of dollars–which isn\’t much in the context of an annual federal budget now approaching $4 trillion.

The main gains from immigration, perhaps not surprisingly, go to the immigrants themselves, who may easily increase their incomes by a factor of four or five when moving from a low-income country to a high-income country. Large increases in immigration thus offer the possibility of a massive increase in world GDP. Giovanni Peri sums up some of the evidence in \"Immigration, Labor Markets, and Productivity.\"

\”[I]f one looks at several recent reports and studies on international migrations by economists and research institutions, their main emphasis is on the large size of global gains obtainable by increasing, even by a small measure, the mobility of people. A study by the World Bank (2005) estimated that an increase in international
migration equal to 3 percent of the labor force of developed countries would produce gains (to be shared globally) of $356 billion. Pritchett (2006) argues that the gains from increasing international mobility, even by a little, are much larger than those that can be obtained by fully liberalizing international trade, estimated in 2005 to be $104 billion. In the more extreme case of a full opening of more wealthy, Organization for International Cooperation and Development (OECD) countries to workers from the rest of the world, Klein and Ventura (2007) calculate a potential massive increase in the world GDP on the order of 150 percent over 50 years. For economists, in short, international migration has the formidable ability of increasing total world income and productivity, generating huge global economic opportunities. The reason is very simple. By allowing people to move to countries where they can produce four to five times more value per hour of work on average than in their country of origin, migrations allow the deployment of world human resources in a massively more efficient way …\”

My own Journal of Economic Perspectives had some articles on emigration in the Summer 2011 issue that emphasized the possibility of substantial Gains from Emigration, as I posted last August 22. (Current and back issues of my journal, going back to 1994, are freely available to all, courtesy of the American Economic Association.)

In short, the strongest case for immigration is not that it conveys huge benefits to the U.S. economy–it probably doesn\’t. But it certainly conveys huge benefits to those who migrate. I believe that national boundaries matter, and I\’m an American at heart. When it comes to public policy, I place a lower value on what happens to non-Americans than I do on what happens to Americans. Thus, I do believe that any costs to Americans resulting from immigration are a legitimate policy issue. But placing a lower value on what happens to non-Americans doesn\’t mean placing no value on what happens to them. The truly enormous gains received by migrants predispose me toward policies that would allow more immigration to the U.S.–and to seek alternatives for dealing with potential costs.