McWages Around the World

It’s hard to compare wages in different countries, because the details of the job differ. A typical job in a manufacturing facility, for example, is a rather different experience in China, Germany, Michigan, or Brazil. But for about a decade, Orley Ashenfelter has been looking at one set of jobs that are extremely similar across countries–jobs at McDonald’s restaurants. He discussed this research and a broader agenda of “Comparing Real Wage Rates” across countries in his Presidential Address last January to the American Economic Association meetings in Chicago. The talk has now been published in the April 2012 issue of the American Economic Review, which will be available to many academics through their library subscriptions. But the talk is also freely available to the public here as Working Paper #570 from Princeton’s Industrial Relations Section.

How do we know that food preparation jobs at McDonald’s are similar? Here’s Ashenfelter:

“There is a reason that McDonald’s products are similar. These restaurants operate with a standardized protocol for employee work. Food ingredients are delivered to the restaurants and stored in coolers and freezers. The ingredients and food preparation system are specifically designed to differ very little from place to place. Although the skills necessary to handle contracts with suppliers or to manage and select employees may differ among restaurants, the basic food preparation work in each restaurant is highly standardized. Operations are monitored using the 600-page Operations and Training Manual, which covers every aspect of food preparation and includes precise time tables as well as color photographs. … As a result of the standardization of both the product and the workers’ tasks, international comparisons of wages of McDonald’s crew members are free of interpretation problems stemming from differences in skill content or compensating wage differentials.”

Ashenfelter has built up McWages data from about 60 countries. Here is a table of comparisons. The first column shows the hourly wage of a crew member at McDonald’s, expressed in U.S. dollars (using the then-current exchange rate). The second column is the wage relative to the U.S. wage level, where the U.S. wage is 1.00. The third column is the price of a Big Mac in that country, again converted to U.S. dollars. And the fourth column is the McWage divided by the price of a Big Mac–as a rough-and-ready way of measuring the buying power of the wage.
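The fourth column is just one division. As a small sketch with made-up placeholder numbers (not Ashenfelter's actual Table 3 data), the columns can be computed like this:

```python
# Illustrative calculation of Ashenfelter's "Big Macs per hour" (BMPH) measure.
# Wages, exchange rates, and prices below are hypothetical, for illustration only.
countries = {
    # country: (hourly McWage in local currency, exchange rate to USD, Big Mac price in local currency)
    "Placeholderland": (10.0, 0.50, 2.0),
    "Examplestan": (300.0, 0.01, 150.0),
}

for name, (wage_local, fx_to_usd, big_mac_local) in countries.items():
    wage_usd = wage_local * fx_to_usd          # column 1: McWage in U.S. dollars
    big_mac_usd = big_mac_local * fx_to_usd    # column 3: Big Mac price in U.S. dollars
    bmph = wage_usd / big_mac_usd              # column 4: Big Macs earned per hour worked
    print(f"{name}: McWage ${wage_usd:.2f}, Big Mac ${big_mac_usd:.2f}, BMPH {bmph:.2f}")
```

Notice that the exchange rate cancels out of the BMPH ratio, which is exactly why dividing the McWage by the local Big Mac price works as a crude purchasing-power adjustment, free of exchange-rate distortions.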

Ashenfelter sums up this data, and I will put the last line in boldface type: “There are three obvious, dramatic conclusions that it is easy to draw from the comparison of wage rates in Table 3. First, the developed countries, including the US, Canada, Japan, and Western Europe have quite similar wage rates, whether measured in dollars or in BMPH. In these countries a worker earned between 2 and 3 Big Macs per hour of work, and with the exception of Western Europe with its highly regulated wage structure, earned around $7 an hour. A second conclusion is that the vast majority of workers, including those in India, China, Latin America, and the Middle East earned about 10% as much as the workers in developed countries, although the BMPH comparison increases this ratio to about 15%, as would any purchasing-power-price adjustment. Finally, workers in Russia, Eastern Europe, and South Africa face wage rates about 25 to 35% of those in the developed countries, although again the BMPH comparison increases this ratio somewhat. In sum, the data in Table 3 provide transparent and credible evidence that workers doing the same tasks and producing the same output using identical technologies are paid vastly different wage rates.”

In passing, it’s interesting to note that McWage jobs pay so much more in western Europe than in the U.S., Canada, and Japan. But let’s pursue the highlighted theme: How can the same job with the same output and the same technology pay more in one country than in another? One part of the answer, of course, is that you can’t hire someone in India or South Africa to make you a burger and fries for lunch. But at a deeper level, the higher McWages in high-income countries are not about the skill or human capital in those countries, but instead reflect that the entire economy is operating at a higher productivity level.

Here is an illustrative figure. The horizontal axis shows the “McWage ratio”: that is, the U.S. McWage is equal to 1.00, and the McWages in all other countries are expressed in proportion. The vertical axis is the “Hourly Output Ratio.” This is measuring output per hour worked in the economy, again with the U.S. level set equal to 1.00, and the output per hour worked in all other countries expressed in proportion. The straight line at a 45-degree angle plots the points at which a country with, say, a McWage at 20% of the U.S. level also has output per hour worked at 20% of the U.S. level, a country with a McWage at 50% of the U.S. level also has output per hour worked at 50% of the U.S. level, and so on.

The key lesson of the figure is that the differences in McWages across countries line up with the overall productivity differences across countries. The main exceptions, in the upper right-hand part of the diagram, are countries where the McWage is above U.S. levels but output-per-hour for the economy as a whole is below U.S. levels: New Zealand, Japan, Italy, Germany. These are countries with minimum wage laws that push up the McWage. 

Ashenfelter emphasizes in his remarks how real wages can be used to assess and compare the living standards of workers. I would add that these measures show that the most important factor determining wages for most of us is not our personal skills and human capital, or our effort and initiative, but whether we are using those skills and human capital in the context of a high-productivity or a low-productivity economy.

Ignorance as Asset and Strategic Outcome

The February 2012 issue of Economy and Society is a special issue focused on a theme of “Strategic unknowns: towards a sociology of ignorance.” The opening essay with this title, by Linsey McGoey, is freely available here. Many academics will have access to the rest of the issue through their library subscriptions.

The central theme of the issue is that ambiguity and ignorance are not just the absence of knowledge, waiting to be illuminated by facts and disclosure. Instead, ambiguity and ignorance are in certain situations the preferred strategic outcome. McGoey writes (citations omitted): “Ignorance is knowledge: that is the starting premise and impetus of the following collection of papers. Together, they contribute to a small but growing literature which explores how different forms of strategic ignorance and social unknowing help both to maintain and to disrupt social and political orders, allowing both governors and the governed to deny awareness of things it is not in their interest to acknowledge …”

Many of the examples are sociological in nature, but others are based in economic and policy situations. For example, consider a number of situations that have to do with a policy response to risky situations: the risk that smoking causes cancer, the risk that growing carbon emissions will lead to climate change, the risk of future terrorist actions (and whether invading certain countries will increase or reduce those risks), and the risk of fluctuations in financial markets. McGoey writes:

“Within the game of predicting risk, one often wins regardless of whether risks materialize or not. If a predicted threat fails to emerge, the identification of the threat is credited for deterring it. If a predicted threat does emerge, authorities are commended for their foresight. If an unpredicted threat appears, authorities have a right to call for more resources to combat their own earlier ignorance. ‘The beauty of a futuristic vision, of course, is that it does not have to be true’, writes Kaushik Sunder Rajan (2006, p. 121) in a study of the way expectations surrounding new biotechnologies help to create funding opportunities and foster faith in the technology regardless of whether expectations prove true or not. In fact, expectations are often particularly fruitful when they fail to materialize, for more hope and hype are needed to remedy thwarted expectations. Attention to the resilience of risks – the way that claims of risk often feed on their own inaccuracy – helps to highlight the value of conditionality for those in political authority.”

One of the essays in the volume, by William Davies and Linsey McGoey, applies this framework to thinking about the recent financial crisis. They point out that many financial professionals begin from the starting point that risk and uncertainty are huge problems, and thus one needs their high-priced help to address these issues. In this way, claims of ambiguity and ignorance are an asset for the finance industry. If investments go well, then the financial professionals claim credit for steering successfully through these oceans of uncertainty. But when investments and decisions go badly, as in the Great Recession, they claim absolution for their decisions by reiterating just how ambiguous and unclear the financial markets are, and how no one could have really known what was going to happen. And somehow, this just proves that their expertise is more needed than ever. They write: “We examine the usefulness of the failure or refusal to act on warning signs, regardless of the motivations why. We look at the double value of ignorance: the ways that social silence surrounding unsettling facts enabled profitable activities to endure despite unease about their implications and, second, the way earlier silences are then harnessed and mobilized to absolve earlier inaction.”

In another essay, Jacqueline Best applies these ideas in the context of the World Bank’s “good governance agenda” and the IMF’s “conditionality policy.” She writes: “Both policies have been ambiguously defined throughout their history, enabling them to be interpreted and applied in different ways. This ambiguity has facilitated the gradual expansion of the scope of the policies. … Actors at both the IMF and the World Bank were not only aware of the central role of ambiguity in their policies, but were also ambivalent about it. … Finally, although staff and directors at both institutions may have been ambivalent about the role of ambiguity in these policies, they ultimately ensured that ambiguities persisted and even proliferated.” Best also notes that ambiguity is hard to control, and can lead to unintended consequences.

In yet another essay, Steve Rayner writes about “Uncomfortable knowledge: the social construction of ignorance in science and environmental policy discourses.” He writes: “My interest is therefore in how information is kept out rather than kept in and my approach is to treat ignorance as a necessary social achievement rather than a simple background failure to acquire, store, and retrieve knowledge.” Rayner writes: “An example of clumsy or incompletely theorized arrangements is the implicit consensus on US nuclear energy policy that emerged in the 1980s and persisted for the best part of three decades. Despite the complete absence of any Act of Congress or Presidential Order, it was implicitly accepted by government, industry, and environmental NGOs that the US would continue to support nuclear R&D while operating an informal moratorium on the addition of new nuclear generating capacity. All of the parties agreed to this, but for various reasons, all had a stake in not acknowledging the existence of a settlement.”

One might add that many environmental laws and other regulatory policies are chock-full of ambiguous language, which gives regulators the ability to interpret these rules as tough-minded while also giving potential offenders the possibility of saying that they had no way of knowing the rules would be applied in this way. Rayner also offers a nicely provocative claim about tendencies to dismiss and deny in the context of warnings about climate change: “It seems odd that climate science has been held to a ‘platinum standard’ of precision and reliability that goes well beyond anything that is normally required to make significant decisions in either the public or private sectors. Governments have recently gone to war based on much lower-quality intelligence than that which science offers us about climate change. Similarly, firms embark on product launches and mergers on the basis of much lower-quality information.”

Academic research, of course, often uses a feigned ignorance to generate a greater persuasive effect. The title of a research paper is often written in the form of a question, and the theory and data are often presented as if the author were a Solomonic figure encountering this material for the first time, guided only by a disinterested pursuit of Truth (with a capital T). The implications for the reputation of past work, or its political implications, are shunted off to the side. Research would have less persuasive effect if it started off by saying, “I’ve been hammering on this same conclusion for 25 years now, and I find pretty much exactly the same result every time I look at any data set from any time or place–and by the way, this conclusion also supports the political outcomes I prefer.”

One of many implications of thinking about ignorance and ambiguity as assets and as strategic behavior is that it highlights that many economic actors and policy-makers have strong incentives to promote both their own ignorance, and more broadly, the idea that ambiguity makes true knowledge impossible. Ignorance can be a power grab, and the basis for a job, and a get-out-of-jail-free card.

Why Does the U.S. Spend More on Health Care than Other Countries?

Everyone knows that the U.S. spends far more on health care than other countries, but do you know how much more? In 2009, the U.S. spent 17.4% of GDP on health care (using OECD data). The closest contenders are Netherlands (12% of GDP), France (11.8%), Germany (11.6%), Denmark (11.5%), and Canada (11.4%). The U.S. has higher per capita GDP than these countries, so the gap in absolute spending is even higher. In 2009, the U.S. spent $7,960 per person on health care, and the closest contenders were Switzerland ($5,144 per person) and Netherlands ($4,914).

When I hear people argue that the U.S. should follow the path of the UK health care system, I sometimes find myself thinking: “You mean that U.S. health care spending per person should be slashed by 56%, from $7,960 per person to $3,487 per person? Really?”
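For anyone who wants to check, the 56% figure is straightforward arithmetic on the two per-person spending numbers:

```python
# Check the percentage cut implied by moving U.S. per-person health spending
# to the U.K. level (the 2009 OECD figures quoted in the text).
us_spending = 7960   # U.S. health spending per person, 2009
uk_spending = 3487   # U.K. health spending per person, 2009

cut = (us_spending - uk_spending) / us_spending
print(f"Implied cut: {cut:.0%}")  # prints "Implied cut: 56%"
```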

What accounts for these differences in health care spending across countries? David Squires assembles some of the evidence in “Explaining High Health Care Spending in the United States: An International Comparison of Supply, Utilization, Prices and Quality,” a May 2012 “issue brief” written for the Commonwealth Fund. I ran across it here at Larry Willmore’s Thought du Jour blog. I’ll also compare and contrast it with a paper by David M. Cutler and Dan P. Ly, “The (Paper)Work of Medicine: Understanding International Medical Costs,” which appeared in the Spring 2011 issue of my own Journal of Economic Perspectives. For readability, footnotes and references to exhibits are omitted from the quotations below.

Higher U.S. health care spending is not because Americans on average are notably less healthy.
As Squires sums up: “U.S. has smaller elderly population and fewer smokers, but higher obesity rates. … Higher rates of obesity undoubtedly inflate health spending; one study estimates the medical costs attributable to obesity in the U.S. reached almost 10 percent of all medical spending in 2008. However, the younger population and lower rates of smoking likely have an opposite effect, reducing U.S. health care spending relative to most other countries.”

Higher U.S. health care spending is not because the U.S. has more doctors or hospital beds.
“There were 2.4 physicians per 1,000 population in the U.S. in 2009, fewer than in all other study countries except Japan. Likewise, patients had fewer doctor consultations in the U.S. (3.9 per capita) than in any other country except Sweden. Hospital supply and use showed similar trends, with the U.S. having fewer hospital beds (2.7 per 1,000 population), shorter lengths of stay for acute care (5.4 days), and fewer discharges (131 per 1,000 population) than the OECD median …”

Prices for brand-name drugs are much higher in the U.S., but generics are cheaper.
Squires writes: “[P]rices for the 30 most-commonly prescribed drugs are one-third higher than in Canada and Germany, and more than double the prices in Australia, France, Netherlands, New Zealand, and the U.K. Notably, prices for generic drugs are lower in the U.S. than in these other countries, whereas prices for brand-name drugs are much higher.”

Cutler and Ly confirm this general pattern, but also put the potential cost savings in perspective: “However, because pharmaceuticals are only about 10 percent of U.S. healthcare spending, the overall amount that could be saved by moving to U.S. government monopsony purchasing of drugs is relatively small—perhaps 20 to 30 percent of pharmaceutical spending, or 2 to 3 percent of total medical costs. These cost savings also would have to be weighed against the possibility of reduced incentives for investment and innovation in the pharmaceutical industry. The dollar amount of excess pharmaceutical payments in the United States is approximately the total amount of pharmaceutical company research and development (R&D).”

U.S. doctors are paid more, but they also live in an economy with a more unequal distribution of wages.
Squires writes: “U.S. primary care physicians generally receive higher fees for office visits and orthopedic physicians receive higher fees for hip replacements than in Australia, Canada, France, Germany, and the U.K. … U.S. primary care doctors ($186,582) and particularly orthopedic doctors ($442,450) earned greater income than in the other five countries …”

Cutler and Ly confirm: “The average U.S. specialist physician earns $230,000 annually—78 percent above the average in other countries … . Primary care physicians earn less (they earn $161,000 on average), but the same percentage more than their peers in other countries. … If we reduced all physician incomes in the United States to match the international ratio of physicians’ incomes to per capita GDP, U.S. healthcare spending would be lower by roughly 2 percent. However, these seemingly high salaries for U.S. physicians appear less high in the context of the broader income distribution.” Cutler and Ly go on to point out that high-compensation workers in the U.S. economy earn more than their international counterparts in just about every profession–after all, that’s part of what it means to say that the U.S. has a less equal distribution of income.

Some medical device technologies like scanning are more widely used in the U.S.; some like hip replacements are not.
“In 2009, the U.S., along with Germany, performed the most knee replacements (213 per 100,000 population) among the study countries, and 75 percent more knee replacements than the OECD median (122 per 100,000 population). However, the U.S. performed barely more hip replacements than the OECD median, and significantly less than several of the other study countries …”

“Relative to the other study countries where data were available, there were an above-average number of magnetic resonance imaging (MRI) machines (25.9 per million population), computed tomography (CT) scanners (34.3 per million), positron emission tomography (PET) scanners (3.1 per million), and mammographs (40.2 per million) in the U.S. in 2009. Utilization of imaging was also highest in the U.S., with 91.2 MRI exams and 227.9 CT exams per 1,000 population. MRI and CT devices were most prevalent in Japan, though no utilization data were available for that country. … [T]he U.S. commercial average diagnostic imaging fees ($1,080 for an MRI and $510 for a CT exam) are far higher than what is charged in almost all of the other countries …”

The U.S. does a relatively poor job of managing chronic disease.
Squires writes: “[Consider] rates of potentially preventable mortality due to asthma (for those between ages 5 and 39) and lower-extremity amputations due to diabetes per 100,000 population. On both measures, the U.S. had among the highest rates, suggesting a failure to effectively manage these chronic conditions that make up an increasing share of the disease burden.”

Many chronic diseases share the general property that if they are well-managed every single day, with a combination of drugs, lifestyle, and certain kinds of monitoring of physical conditions, it is possible to reduce the need for enormously costly episodes of hospitalization. As the Centers for Disease Control puts it: “Chronic diseases—such as heart disease, cancer, and diabetes—are the leading causes of death and disability in the United States. Chronic diseases account for 70% of all deaths in the U.S., which is 1.7 million each year. These diseases also cause major limitations in daily living for almost 1 out of 10 Americans ….”

Prices for hospital stays are substantially higher in the U.S.
Squires points out: “[H]ospital stays in the U.S. were far more expensive than in the other study countries, exceeding $18,000 per discharge compared with less than $10,000 in Sweden, Australia, New Zealand, France, and Germany.” And remember, these higher costs per hospital stay happen even though the stays themselves are on average shorter in the U.S.

The tougher question is to what extent these higher costs per hospital stay reflect a larger quantity of concentrated and effective high-tech care being provided, and to what extent it’s just a matter of higher prices. The evidence here is mixed. It does appear that for some conditions, Americans receive more hospital care. Cutler and Ly write: “Americans also receive more-intensive care than do Canadians. While the population-adjusted hospital admission rates are about the same in the two countries, additional procedures are provided to those with the same diagnosis in the United States. For example, people with a heart attack in the United States are twice as likely to receive bypass surgery or angioplasty than are similar people in Canada.” When it comes to cancer survival rates, Squires points out: “The U.S. had the highest survival rates among the study countries for breast cancer (89%) and, along with Norway, for colorectal cancer (65%).”

On the other side, the more aggressive use of heart surgery in the U.S. as compared to Canada doesn’t seem to mean better health outcomes; instead, it reflects the existence of more heart-surgery facilities. Cutler and Ly: “On one side, the greater use of intensive therapies after a heart attack in the United States compared to Canada is not associated with improved mortality, though morbidity is more difficult to determine. Similarly, a recent study concluded that there was no systematic difference in outcomes in favor of the United States over Canada; if anything, Canadians had better outcomes in most circumstances … [T]he province of Ontario has 11 open-heart surgery facilities, while the state of Pennsylvania, with roughly the same population as Ontario, has more than five times the number of heart surgery facilities. California is three times larger in population but has 10 times the number of heart surgery facilities. Given this difference in the number of facilities, it is simply impossible for physicians in Ontario to perform as many open heart surgery operations as those in Pennsylvania or California.”

Also, not all cancer survival rates are better in the U.S. Squires writes: “However, at 64 percent, the survival rate for cervical cancer in the U.S. was worse than the OECD median (66%), and well below the 78 percent survival rate in Norway—indicating significant room for improvement.”

Administrative costs of health care are much higher in the U.S.
Squires doesn’t mention this point, but it is a main emphasis for Cutler and Ly. They write:

“[T]he U.S. healthcare system is in great need of administrative simplification. There are few other areas of the U.S. economy where waste is so apparent and the possibility of savings is so tangible. … Perhaps the most troubling difference between the U.S. and Canadian healthcare systems is the differential amount spent on administration. For every office-based physician in the United States, there are 2.2 administrative workers. That exceeds the number of nurses, clinical assistants, and technical staff put together. One large physician group in the United States estimates that it spends 12 percent of revenue collected just collecting revenue. Canada, by contrast, has only half as many administrative workers per office-based physician. The situation is no better in hospitals. In the United States, there are 1.5 administrative personnel per hospital bed, compared to 1.1 in Canada. Duke University Hospital, for example, has 900 hospital beds and 1,300 billing clerks. On top of this are the administrative workers in health insurance. Health insurance administration is 12 percent of premiums in the United States and less than half that in Canada.”

“International comparisons of medical care occupations are difficult, but they suggest that the United States has more administrative personnel than other countries do. … [T]he United States has 25 percent more healthcare administrators than the United Kingdom, 165 percent more than the Netherlands, and 215 percent more than Germany. The number of clerks of all forms (including data entry clerks) is much higher in the United States as well.”

“What are all these administrative personnel doing? … One part is credentialing—receiving permission to practice medicine in a particular hospital or for a particular health plan. The average physician submits 18 credentialing applications annually—each insurer, hospital, ambulatory surgery facility, and the like, requires a different one—consuming 70 minutes of staff time and 11 minutes of physician time per application. Verifying eligibility for services is also costly. Insurance information must be verified for 20 to 30 patients daily, including three or four patients for whom verification must be sought orally. Because people change insurance plans frequently and the cost-sharing they are charged varies with plan and with past utilization (for example, how much of the deductible have they spent?), the determination of what to charge a patient is especially difficult. … Finally, significant time is spent on billing and payment collection. On average, about three claims are denied per physician per week and need to be rebilled. … Three-quarters of denied bills are ultimately paid, but the administrative cost of securing the payment is very high. Provider groups in the United States employ 770 full-time equivalent workers per $1 billion collected, compared to an average in other U.S. industries of about 100. By all indications, the administrative burden is rising over time as insurance policies have become more complex, while the technology of administration has not kept pace.”
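To get a rough sense of the magnitudes behind that 770-versus-100 staffing comparison, here is a back-of-the-envelope sketch. The cost-per-worker figure is my own illustrative assumption, not a number from Cutler and Ly:

```python
# Back-of-the-envelope: collection/administration staffing per $1 billion collected,
# using Cutler and Ly's FTE figures. The cost per full-time worker is an assumed
# illustrative number, NOT from their paper.
revenue = 1_000_000_000        # $1 billion collected
ftes_healthcare = 770          # full-time equivalents per $1B, U.S. provider groups
ftes_other = 100               # typical of other U.S. industries
assumed_cost_per_fte = 60_000  # hypothetical fully loaded annual cost per worker

share_healthcare = ftes_healthcare * assumed_cost_per_fte / revenue
share_other = ftes_other * assumed_cost_per_fte / revenue
print(f"Healthcare collection overhead: {share_healthcare:.1%} of revenue")
print(f"Other industries: {share_other:.1%} of revenue")
print(f"Staffing ratio: {ftes_healthcare / ftes_other:.1f}x")
```

The dollar shares move up or down with the assumed salary, so don't lean on them; the robust part of the comparison is the 7.7-to-1 staffing ratio itself.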

Conclusion

The question of why the U.S. spends more than 50% more per person on health care than the next-highest countries (Switzerland and the Netherlands), and more than double per person what many other countries spend, may never have a simple answer. Still, the main ingredients of an answer are becoming clearer. The U.S. spends vastly more on hospitalization and acute care, with a substantial share of that going to high-tech procedures like surgery and imaging. The U.S. does a poor job of managing chronic conditions, which then lead to episodes of costly hospitalization. The U.S. also seems to spend vastly more on administration and paperwork, with much of that related to credentialing, documenting, and billing–which is again a particularly important issue in hospitals. Any honest effort to come to grips with high and rising U.S. health care costs will have to tackle these factors head-on.

Occupational Licensing and Low-Income Jobs

Pretty much everything I know about the economics of occupational licensing I learned from Morris Kleiner, a colleague from the days when I was based at the Humphrey School at the University of Minnesota. Morrie lays out many of the issues here in a Fall 2000 article in my own Journal of Economic Perspectives, as well as in his 2006 book, Licensing Occupations: Ensuring Quality or Restricting Competition?

He points out that nearly one-third of the U.S. labor force works in jobs where some form of government license is a requirement. Some of the largest occupations that require licenses include teachers, nurses, engineers, accountants, and lawyers. Occupational licensing poses a potential tradeoff: on one side, requiring licenses offers a promise of a reliably high quality of service; on the other side, requiring licenses is a barrier to entry that tends to reduce the quantity of jobs in that occupation but increase the wage. Kleiner and others investigate this subject by looking at differences in licensing requirements for a certain occupation across states, and searching for evidence of wage and quality differences. A typical finding is that the wage differences are readily perceptible, but the quality differences are not. Licensing is distinguishable from certification: with certification, you are free to hire someone who doesn’t possess the certification if you like, but with licensing, hiring someone without the license is illegal. As an example, travel agents and mechanics are often certified, but they are typically not licensed.

Dick M. Carpenter II, Ph.D., Lisa Knepper, Angela C. Erickson and John K. Ross focus on documenting differences between states in 102 of the job categories counted by the Bureau of Labor Statistics that require a license in at least one state and that pay below-average wages. They report the results in License to Work: A National Study of Burdens from Occupational Licensing, a report from the Institute for Justice. They make the case, in an indirect way, that many of these occupational rules are more about limiting competition than about quality of service: they point out that licensing rules about fees, training, exams, minimum age, and minimum schooling vary enormously across states, with no particular evidence that reliability or safety are worse in states with lesser or no licensing requirements. The report goes into state-by-state and occupation-by-occupation detail, but here are some summary comments:

“The need to license any number of the occupations in this sample defies common sense. A short list would include interior designers, shampooers, florists, upholsterers, home entertainment installers, funeral attendants, auctioneers and interpreters for the deaf. Most of these occupations are licensed in just a handful of states; interpreters are licensed in only 16 states, while auctioneers are licensed in 33. If, as licensure proponents often claim, a license is required to protect the public health and safety, one would expect more consistency. For example, only five states require licenses for shampooers, but it is highly unlikely that conditions in those five states are any different …”

“Quite literally, EMTs [emergency medical technicians] hold lives in their hands, yet 66 other occupations have greater average licensure burdens than EMTs. This includes interior designers, barbers and cosmetologists, manicurists and a host of contractor designations. By way of perspective, the average cosmetologist spends 372 days in training; the average EMT a mere 33.”



\”Licensure irrationalities are doubly evident in the inconsistencies by burden across states. Looking again at manicurists, while 10 states require four months or more of training, Alaska demands only about three days and Iowa about nine days. It seems unlikely that aspiring manicurists in Alabama (163 days) and Oregon (140 days) truly need so much more time in training. But manicurists are not alone. The education and experience requirements for animal trainers range from zero to almost 1,100 days, or three years. And for vegetation pesticide handlers, training obligations range from zero to 1,460 days, or four years, with fees up to $350. This high degree of variation is prevalent throughout the occupations. Thirty-nine of them have differences of more than 1,000 days between the minimum and maximum number of days required for education and experience. And another 23 occupations have differences of more than 700 days.\”



\”Finally, irrationalities are particularly notable when few states license an occupation but do so onerously. One clear example is interior design, the most difficult of the 102 occupations to enter, yet licensed in only three states and D.C. Another is social service assistants, the fourth most difficult occupation to enter. It requires nearly three-and-a-half years of training but is only licensed in six states and D.C. Dietetic technicians must spend 800 days in education and training, making for the eighth most burdensome requirements, but they are licensed in only three states. Home entertainment installers must have about eight months of training on average, but only in three states. The seven states that license tree trimmers require, on average, more than a year of training.\”



\”The 102 occupational licenses studied require of aspiring workers, on average, $209 in fees, one exam and about nine months of education and training. … Thirty-five occupations require more than a year of education and training, on average, and another 32 require three to nine months. At least one exam is required for 79 of the occupations. …
Particularly noteworthy is the percentage of low- and middle-income workers with less than a high school diploma—15.7 percent. As documented below, a number of the 102 occupations studied require the completion of at least 12th grade, a requirement that effectively bans a substantial number of people from those occupations.\”

\”[S]even of the 102 occupations studied are licensed in all 50 states and the District of Columbia: pest control applicator, vegetation pesticide handler, cosmetologist, EMT, truck driver, school bus driver and city bus driver. Another eight occupations are licensed in 40 to 50 states. Thus, the vast majority of these occupations are licensed in fewer than 40 states, and five are licensed in only one state each: florist, forest worker, fire sprinkler system tester, conveyor operator and non-contractor pipelayer. On average, the occupations on this list are licensed in about 22 states.\”

My own guess is that the politics of passing state-level occupational licensing laws is driven by three factors: 1) lobbying by those who already work in the occupation to limit competition; 2) passing laws in response to wildly unrepresentative anecdotes of terrible or dangerous service; and 3) the tendency, when setting standards, to feel that more is better. But in a U.S. economy that is hurting for job creation, especially jobs for low-income workers, states should be seriously rethinking many of their occupational licensing rules. Many would be better replaced with lower standards, certification rather than licensing, or no licenses at all. 

Teen Pregnancy: What Causes What?

Here is a classic problem of cause and effect. Teenagers who give birth are more likely to be from households with lower income levels. Also, teenagers who give birth tend to end up later in life in households with lower income levels. But does the lower income level cause teens to be more likely to give birth? Or does giving birth as a teen cause that woman to be more likely to end up in a lower-income household? How can one untangle cause and effect? Melissa S. Kearney and Phillip B. Levine tackle these questions in \”Why is the Teen Birth Rate in the United States So High and Why Does It Matter?\” which appears in the Spring 2012 issue of my own Journal of Economic Perspectives. They have lots of interesting comments to make about variation in teen birthrates across states and countries. Here, I\’ll focus on their analysis of the cause-and-effect question, which surprised me and offers a nice example of how economists try to disentangle these sorts of issues.

\”Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried and that poor outcomes seen later in life (relative to teens who do not have children) are simply the continuation of the original low economic trajectory. That is, teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence.\”

Conceptually, how would one tell whether giving birth as a teenager is a cause of lower future economic prospects? Just comparing life outcomes for teenage girls who give birth and those who don\’t will give you a correlation, but not causation.  \”A comparison of the outcomes of women who did and who did not give birth as teens is inherently biased by selection effects: teenage girls who “select” into becoming pregnant and subsequently giving birth (as opposed to choosing abortion) are different in terms of their background characteristics and potential future outcomes than teenage girls who delay childbearing.\” The problem is made more difficult because some of the background characteristics may be measurable in the data (like family income level, or ethnicity, or if it\’s a single-parent family) but many other characteristics are not available in the data (like the personality traits of the teenage girl or the values lived by the family).

 In an ideal experiment, one might want a research design in which a random sample of teenagers becomes pregnant and gives birth, and then you could track the outcomes. Of course, randomized pregnancy is an impractical research design! But here are four approaches used by clever economists to disentangle this question of cause and effect. 

A within-family approach. Look at life outcomes for sisters who give birth at different ages. The result of this kind of study is \”once background characteristics are controlled for, the differences are quite modest. Furthermore, even these modest differences likely overstate the costs of teen childbearing, since the sister who gives birth as a teen is likely to be “negatively” selected compared to her sister who does not.\”

Miscarriages.  Of those teens who become pregnant, some will suffer miscarriages. Compare women who are similar in measured characteristics of family background, but some of whom gave birth as teenagers while others had a miscarriage. It turns out that their life outcomes look quite similar: that is, giving birth as a teenager doesn\’t appear to cause any additional decline in later life outcomes.
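The logic of this natural experiment can be sketched with a few lines of simulated data. Everything here is invented for illustration (the latent `trajectory` variable, the miscarriage rate, the income equation): the point is only that, because miscarriage is roughly random among pregnant teens, comparing later incomes of those who gave birth with those who miscarried isolates the effect of the birth itself.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000  # simulated pregnant teenagers

# An underlying "economic trajectory" drives later income; pregnant teens
# skew disadvantaged. Miscarriage is assumed random with respect to it.
trajectory = rng.normal(-0.5, 1, n)
miscarriage = rng.random(n) < 0.15
income = 30 + 10 * trajectory + rng.normal(0, 2, n)

# Since miscarriage is as-if random among the pregnant, the gap between
# those who gave birth and those who miscarried estimates the causal
# effect of the birth itself -- here, by construction, about zero.
gap = income[~miscarriage].mean() - income[miscarriage].mean()
print(f"birth vs miscarriage gap: {gap:.2f}")
```

If giving birth had a direct causal cost, it would show up in this gap; in the actual studies Kearney and Levine review, it largely does not.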

Age at first menstruation. Girls who menstruate earlier are at greater risk of becoming pregnant as teenagers. One can use a statistical approach to look at two groups of women who are similar in measured characteristics of family background, but where one group has a higher pregnancy rate because they began their menstrual cycle earlier. However, the life outcomes for these groups look quite similar: that is, a random chance of being more likely to give birth as a teenager (because of an earlier age of first menstruation) doesn\’t appear to cause any additional decline in later life outcomes.


 Propensity scores. Look at girls within a certain school, so that they live in more-or-less the same neighborhood. Using the available data, develop a \”propensity score\” that measures how likely a girl is to give birth as a teenager. Then compare the life outcomes for girls with similar propensity scores, some of whom gave birth and some of whom did not. There doesn\’t seem to be a difference in life outcomes, again suggesting that giving birth as a teenager doesn\’t much alter other life outcomes. 
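A minimal simulation can show why the selection problem matters and how a propensity-score comparison addresses it. All variables here are invented for illustration: a latent "trajectory" drives both teen birth and adult income, teen birth itself has no direct effect, and yet the naive comparison shows a large income gap that comparing within propensity bins mostly removes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Latent trajectory: low values raise teen-birth risk and lower adult income.
trajectory = rng.normal(0, 1, n)

# Birth probability depends only on trajectory; birth has no direct effect.
p_birth = 1 / (1 + np.exp(2 * trajectory))
teen_birth = rng.random(n) < p_birth
income = 30 + 10 * trajectory + rng.normal(0, 2, n)

# Naive comparison: a large gap, driven entirely by selection.
naive_gap = income[teen_birth].mean() - income[~teen_birth].mean()

# Propensity comparison: bin by birth probability, compare outcomes
# within bins, then average the within-bin gaps.
bins = np.digitize(p_birth, np.linspace(0, 1, 21))
gaps, weights = [], []
for b in np.unique(bins):
    mask = bins == b
    treated, control = income[mask & teen_birth], income[mask & ~teen_birth]
    if len(treated) > 0 and len(control) > 0:
        gaps.append(treated.mean() - control.mean())
        weights.append(mask.sum())
adjusted_gap = np.average(gaps, weights=weights)

print(f"naive gap: {naive_gap:.1f}")        # sizable and negative
print(f"adjusted gap: {adjusted_gap:.1f}")  # much smaller in magnitude
```

In real data the adjustment can only use observed characteristics, which is why researchers combine this design with the natural experiments above; but the sketch shows how a raw gap can shrink dramatically once selection is accounted for.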

Kearney and Levine sum up the evidence on cause and effect this way: \”Taken as a whole, previous research has had considerable difficulty finding much evidence in support of the claim that teen childbearing has a causal impact on mothers and their children. Instead, at least a substantial majority of the observed correlation between teen childbearing and inferior outcomes is the result of underlying differences between those who give birth as a teen and those who do not.\”

Kearney and Levine also offer an unexpected (to me) perspective on policies to reduce teen pregnancy:

\”Moreover, no silver bullet such as expanding access to contraception or abstinence education will solve this particular social problem. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to “drop-out” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement. This thesis suggests that to address teen childbearing in America will require addressing some difficult social problems: in particular, the perceived and actual lack of economic opportunity among those at the bottom of the economic ladder.\”

The statement about teenage girls \”choosing\” nonmarital motherhood should be understood not as a claim that all pregnant 15-year-olds carefully considered their life options and decided on pregnancy! Instead, the economists\’ view of choice is that we all make groups of choices every day–say, choices about exercise and calories consumed–that make certain outcomes more likely. Decisions that are not well-considered, or that raise the risk of undesired side effects, still have a large ingredient of choice. For example, we typically view those who drive drunk as having made a \”choice.\”

The cause-and-effect evidence here suggests that for many women who give birth as teenagers, their life outcomes like level of education achieved, income, employment, and chance of marriage are already so constrained that they are not made worse off by having a child as a teenager. Encouragement about contraception or abstinence can help reduce teen pregnancy on the margin. But what many teen girls from low socioeconomic status backgrounds need is a reduced prospect of marginalization, and a greater chance for personal and economic advancement.

On the Job for 100 Issues of JEP

I was hired by Joseph Stiglitz 26 years ago to start a new economics journal, the Journal of Economic Perspectives. It took us a year from the starting line to mailing our first issue, but the Spring 2012 issue, now available on-line, is the 100th issue. Like all issues of JEP back to 1994, it is freely available to all, courtesy of the American Economic Association. The first three articles are about the journal: one by current editor David Autor on the effect of the journal within the economics profession, one by Joe Stiglitz remembering the early years and commenting on how the journal has evolved, and one by me called \”From the Desk of the Managing Editor.\”

Here are the two opening paragraphs and the closing paragraph of my essay:

\”Editing isn’t “teaching” and it isn’t “research,” so in the holy trinity of academic responsibilities it is apparently bunched with faculty committees, student advising, and talks to the local Kiwanis club as part of “service.” Yet for many economists, editing seems to loom larger in their professional lives. After all, EconLit indexes more than 750 academic journals of economics, which require an ever-shifting group of editors, co-editors, and advisory boards to function. Roughly one-third of the books in the annotated listings at the back of each issue of the Journal of Economic Literature are edited volumes.

Editors are gatekeepers, and editors are road-blocks—or perhaps these are essentially the same task. Editors shape “the literature,” both what and who is included and how it is presented. I’ve come to believe that “editing” is no more susceptible to a compact single definition than “manufacturing” or “services.” But here is one take on the enterprise of editing from someone who has been sitting in the Managing Editor’s chair for all 100 issues of the Journal of Economic Perspectives since before the first issue of the journal mailed in Summer 1987. …

My job as Managing Editor of JEP has been a pride and a pleasure for these last 25 years. It’s consistently interesting work: after all, my job is to do close readings of the highly varied work of a succession of prominent economists who are trying to explain their thinking—and then to ask them questions until they explain it all to me! Editing an academic journal also offers the psychic frisson of leaving something behind: 100 issues and counting, to be precise. When I visit another college or university, I sometimes walk through the periodical stacks just to see JEP on the shelf. Running an academic journal for a long time offers a pleasing sense of place within the discipline of economics, spinning a web of personal contacts from the up-and-comers to the well-established in academic institutions around the world. Some of my friends refer to my job at the journal as “the guy who gets thanked” at the end of articles. There are worse epitaphs.\”

Spring 2012 Journal of Economic Perspectives

The Spring 2012 issue of my own Journal of Economic Perspectives is now freely available on-line, along with earlier issues back to 1994, courtesy of the American Economic Association. It\’s the 100th issue, and thus a bit of a landmark for the journal and for me personally, because I\’ve been the Managing Editor since the journal began. I\’ll blog about some of the individual papers in the next week or so. Here, I\’ll just provide the \”Table of Contents\” at the top, abstracts below, and links to the papers.  

 Symposium: 100 Issues of JEP

 The Journal of Economic Perspectives at 100 (Issues)
David Autor
Full-Text Access | Supplementary Materials
The Journal of Economic Perspectives and the Marketplace of Ideas: A View from the Founding
Joseph E. Stiglitz
Full-Text Access | Supplementary Materials
From the Desk of the Managing Editor
Timothy Taylor
Full-Text Access | Supplementary Materials

Symposium: International Trade


The Rise of Middle Kingdoms: Emerging Economies in Global Trade
Gordon H. Hanson
Full-Text Access | Supplementary Materials
 Putting Ricardo to Work
Jonathan Eaton and Samuel Kortum
Full-Text Access | Supplementary Materials
Gains from Trade When Firms Matter
Marc J. Melitz and Daniel Trefler
Full-Text Access | Supplementary Materials
Globalization and U.S. Wages: Modifying Classic Theory to Explain Recent Facts

Jonathan Haskel, Robert Z. Lawrence, Edward E. Leamer and Matthew J. Slaughter
Full-Text Access | Supplementary Materials

Articles


Why Is the Teen Birth Rate in the United States So High and Why Does It Matter?
Melissa S. Kearney and Phillip B. Levine
Full-Text Access | Supplementary Materials
Why Was the Arab World Poised for Revolution? Schooling, Economic Opportunities, and the Arab Spring
Filipe R. Campante and Davin Chor
Full-Text Access | Supplementary Materials
Using Internet Data for Economic Research
Benjamin Edelman
Full-Text Access | Supplementary Materials
Jonathan Levin: 2011 John Bates Clark Medalist
Liran Einav and Steve Tadelis
Full-Text Access | Supplementary Materials

Features


Retrospectives: The Introduction of the Cobb-Douglas Regression
Jeff Biddle
Full-Text Access | Supplementary Materials
Recommendations for Further Reading
Timothy Taylor
Full-Text Access | Supplementary Materials


The Journal of Economic Perspectives at 100 (Issues) 
David Autor
When I was a graduate student, I discovered that the Journal of Economic Perspectives embodied much of what I love about the field of economics: the clarity that pierces rhetoric to seek the core of a question; the rigor to identify the causal relationships, tradeoffs, and indeterminacies inherent in a problem; the self-assurance to apply the disciplinary toolkit to problems both sacred and profane; and the force of logic to reach conclusions that might be unexpected, controversial, or refreshingly bland. It never occurred to me in those years that one day I would edit the journal. While doing so is a privilege and a pleasure, I equally confess that it\’s no small weight to be the custodial parent of one of our profession\’s most beloved offspring. No less intimidating is the task of stipulating what this upstart youth has accomplished in its first 25 years and 100 issues in print. Like any empiricist, I recognize that the counterfactual world that would exist without the JEP is unknowable, but my strong hunch is that our profession would be worse off in that counterfactual world. In this essay, I reflect on the journal\’s accomplishments and articulate some of my own goals for the JEP going forward.
Full-Text Access | Supplementary Materials

The Journal of Economic Perspectives and the Marketplace of Ideas: A View from the Founding
Joseph E. Stiglitz
I welcome the opportunity to join in the celebration of the twenty-fifth birthday of the Journal of Economic Perspectives. It is wonderful to see how this \”baby,\” which I, along with Carl Shapiro and Timothy Taylor, nurtured through its formative years—from 1984 (three years before the first issue in 1987) until I left in 1993—has grown up and become an established part of the economics profession. In founding the journal, we had many objectives, hopes, and ambitions. We were concerned about the increasing specialization within the economics profession. We sought to have complex and sometimes arcane or highly mathematical ideas translated into plain English, or at least that dialect of the language known as \”Economese\”—and in a way that was not only informative but engaging. We were worried too about a growing distance between economics and policy. At least a portion of economic research should be related to ideas that were, or should or would be, part of the national and global policy debates. We began with an explicit commitment to present a diversity of viewpoints, hence the word \”perspectives\” in the title. One of the goals we set out for ourselves was to disseminate developments within economics more rapidly. We never shied away from controversy at the journal, but we tried to ensure that the discussion was balanced.
 Full-Text Access | Supplementary Materials

From the Desk of the Managing Editor 
Timothy Taylor
Editing isn\’t \”teaching\” and it isn\’t \”research,\” so in the holy trinity of academic responsibilities it is apparently bunched with faculty committees, student advising, and talks to the local Kiwanis club as part of \”service.\” Yet for many economists, editing seems to loom larger in their professional lives. After all, EconLit indexes more than 750 academic journals of economics, which require an ever-shifting group of editors, co-editors, and advisory boards to function. Roughly one-third of the books in the annotated listings at the back of each issue of the Journal of Economic Literature are edited volumes. Here is one take on the enterprise of editing from someone who has been sitting in the Managing Editor\’s chair for all 100 issues of the Journal of Economic Perspectives since before the first issue of the journal mailed in Summer 1987.
Full-Text Access | Supplementary Materials

The Rise of Middle Kingdoms: Emerging Economies in Global Trade 
Gordon H. Hanson
In this paper, I examine changes in international trade associated with the integration of low- and middle-income countries into the global economy. Led by China and India, the share of developing economies in global exports more than doubled between 1994 and 2008. One feature of new trade patterns is greater South-South trade. China and India have booming demand for imported raw materials, which they use to build cities and factories. Industrialization throughout the South has deepened global production networks, contributing to greater trade in intermediate inputs. A second feature of new trade patterns is the return of comparative advantage as a driver of global commerce. Growth in low- and middle-income nations makes specialization according to comparative advantage more important for the global composition of trade, as North-South and South-South commerce overtakes North-North flows. China\’s export specialization evolves rapidly over time, revealing a capacity to speed up product ladders. Most developing countries hyper-specialize in a handful of export products. The emergence of low- and middle-income countries in trade reveals significant gaps in knowledge about the deep empirical determinants of export specialization, the dynamics of specialization patterns, and why South-South and North-North trade differ.
Full-Text Access | Supplementary Materials

Putting Ricardo to Work
Jonathan Eaton and Samuel Kortum
David Ricardo (1817) provided a mathematical example showing that countries could gain from trade by exploiting innate differences in their ability to make different goods. In the basic Ricardian example, two countries do better by specializing in different goods and exchanging them for each other, even when one country is better at making both. This example typically gets presented in the first or second chapter of a text on international trade, and sometimes appears even in a principles text. But having served its pedagogical purpose, the model is rarely heard from again. The Ricardian model became something like a family heirloom, brought down from the attic to show a new generation of students, and then put back. Nearly two centuries later, however, the Ricardian framework has experienced a revival. Much work in international trade during the last decade has returned to the assumption that countries gain from trade because they have access to different technologies. These technologies may be generally available to producers in a country, as in the Ricardian model of trade, our topic here, or exclusive to individual firms. This line of thought has brought Ricardo\’s theory of comparative advantage back to center stage. Our goal is to make this new old trade theory accessible and to put it to work on some current issues in the international economy.
Full-Text Access | Supplementary Materials

Gains from Trade When Firms Matter
Marc J. Melitz and Daniel Trefler
The rising prominence of intra-industry trade and huge multinationals has transformed the way economists think about the gains from trade. In the past, we focused on gains that stemmed either from endowment differences (wheat for iron ore) or inter-industry comparative advantage (David Ricardo\’s classic example of cloth for port). Today, we focus on three sources of gains from trade: 1) love-of-variety gains associated with intra-industry trade; 2) allocative efficiency gains associated with shifting labor and capital out of small, less-productive firms and into large, more-productive firms; and 3) productive efficiency gains associated with trade-induced innovation. This paper reviews these three sources of gains from trade both theoretically and empirically. Our empirical evidence will be centered on the experience of Canada following its closer economic integration in 1989 with the United States—the largest example of bilateral intra-industry trade in the world—but we will also describe evidence for other countries.
 Full-Text Access | Supplementary Materials

Globalization and U.S. Wages: Modifying Classic Theory to Explain Recent Facts 
Jonathan Haskel, Robert Z. Lawrence, Edward E. Leamer and Matthew J. Slaughter
This paper seeks to review how globalization might explain the recent trends in real and relative wages in the United States. We begin with an overview of what is new during the last 10-15 years in globalization, productivity, and patterns of U.S. earnings. To preview our results, we then work through four main findings: First, there is only mixed evidence that trade in goods, intermediates, and services has been raising inequality between more- and less-skilled workers. Second, it is more possible, although far from proven, that globalization has been boosting the real and relative earnings of superstars. The usual trade-in-goods mechanisms probably have not done this. But other globalization channels—such as the combination of greater tradability of services and larger market sizes abroad—may be playing an important role. Third, seeing this possible role requires expanding standard Heckscher-Ohlin trade models, partly by adding insights of more recent research with heterogeneous firms and workers. Finally, our expanded trade framework offers new insights on the sobering fact of pervasive real-income declines for the large majority of Americans in the past decade.
Full-Text Access | Supplementary Materials

Why Is the Teen Birth Rate in the United States So High and Why Does It Matter? 
Melissa S. Kearney and Phillip B. Levine
Why is the rate of teen childbearing so unusually high in the United States as a whole, and in some U.S. states in particular? U.S. teens are two and a half times as likely to give birth as compared to teens in Canada, around four times as likely as teens in Germany or Norway, and almost ten times as likely as teens in Switzerland. A teenage girl in Mississippi is four times more likely to give birth than a teenage girl in New Hampshire—and 15 times more likely to give birth as a teen compared to a teenage girl in Switzerland. We examine teen birth rates alongside pregnancy, abortion, and \”shotgun\” marriage rates as well as the antecedent behaviors of sexual activity and contraceptive use. We demonstrate that variation in income inequality across U.S. states and developed countries can explain a sizable share of the geographic variation in teen childbearing. Our reading of the totality of evidence leads us to conclude that being on a low economic trajectory in life leads many teenage girls to have children while they are young and unmarried. Teen childbearing is explained by the low economic trajectory but is not an additional cause of later difficulties in life. Surprisingly, teen birth itself does not appear to have much direct economic consequence. Our view is that teen childbearing is so high in the United States because of underlying social and economic problems. It reflects a decision among a set of girls to \”drop-out\” of the economic mainstream; they choose nonmarital motherhood at a young age instead of investing in their own economic progress because they feel they have little chance of advancement.
 Full-Text Access | Supplementary Materials

Why Was the Arab World Poised for Revolution? Schooling, Economic Opportunities, and the Arab Spring 
Filipe R. Campante and Davin Chor
What underlying long-term conditions set the stage for the Arab Spring? In recent decades, the Arab region has been characterized by an expansion in schooling coupled with weak labor market conditions. This pattern is especially pronounced in those countries that saw significant upheaval during the first year of the Arab Spring uprisings. We argue that the lack of adequate economic opportunities for an increasingly educated populace can help us understand episodes of regime instability such as the Arab Spring.
 Full-Text Access | Supplementary Materials

Using Internet Data for Economic Research
Benjamin Edelman
The data used by economists can be broadly divided into two categories. First, structured datasets arise when a government agency, trade association, or company can justify the expense of assembling records. The Internet has transformed how economists interact with these datasets by lowering the cost of storing, updating, distributing, finding, and retrieving this information. Second, some economic researchers affirmatively collect data of interest. For researcher-collected data, the Internet opens exceptional possibilities both by increasing the amount of information available for researchers to gather and by lowering researchers\’ costs of collecting information. In this paper, I explore the Internet\’s new datasets, present methods for harnessing their wealth, and survey a sampling of the research questions these data help to answer. The first section of this paper discusses \”scraping\” the Internet for data—that is, collecting data on prices, quantities, and key characteristics that are already available on websites but not yet organized in a form useful for economic research. A second part of the paper considers online experiments, including experiments that the economic researcher observes but does not control (for example, when Amazon or eBay alters site design or bidding rules); and experiments in which a researcher participates in design, including those conducted in partnership with a company or website, and online versions of laboratory experiments. Finally, I discuss certain limits to this type of data collection, including both \”terms of use\” restrictions on websites and concerns about privacy and confidentiality.
 Full-Text Access | Supplementary Materials

Jonathan Levin: 2011 John Bates Clark Medalist 
Liran Einav and Steve Tadelis
Jonathan Levin, the 2011 recipient of the American Economic Association\’s John Bates Clark Medal, has established himself as a leader in the fields of industrial organization and microeconomic theory. Jon has made important contributions in many areas: the economics of contracts and organizations; market design; markets with asymmetric information; and estimation methods for dynamic games. Jon\’s combination of breadth and depth is remarkable, ranging from important papers in very distinct areas such as economic theory and econometric methods to applied work that seamlessly integrates theory with data. In what follows, we will attempt to do justice not only to Jon\’s academic work, but also try to sketch a broader portrait of Jon\’s other contributions to economics as a gifted teacher, dedicated advisor, and selfless provider of public goods.
 Full-Text Access | Supplementary Materials

Retrospectives: The Introduction of the Cobb-Douglas Regression
Jeff Biddle
At the 1927 meetings of the American Economic Association, Paul Douglas presented a paper entitled \”A Theory of Production,\” which he had coauthored with Charles Cobb. The paper proposed the now familiar Cobb-Douglas function as a mathematical representation of the relationship between capital, labor, and output. The paper\’s innovation, however, was not the function itself, which had originally been proposed by Knut Wicksell, but the use of the function as the basis of a statistical procedure for estimating the relationship between inputs and output. The paper\’s least squares regression of the log of the output-to-capital ratio in manufacturing on the log of the labor-to-capital ratio—the first Cobb-Douglas regression—was a realization of Douglas\’s innovative vision that a stable relationship between empirical measures of inputs and outputs could be discovered through statistical analysis, and that this stable relationship could cast light on important questions of economic theory and policy. This essay provides an account of the introduction of the Cobb-Douglas regression: its roots in Douglas\’s own work and in trends in economics in the 1920s, its initial application to time series data in the 1927 paper and Douglas\’s 1934 book The Theory of Wages, and the early reactions of economists to this new empirical tool.

Recommendations for Further Reading 
Timothy Taylor

True Cost of Electricity Generation

The price we pay for energy in part reflects the cost of production. But our primary sources of energy involve other costs not captured in the purchase price: mainly the health costs of pollution, but other environmental effects as well. Michael Greenstone and Adam Looney set out to compare the full costs of various ways of producing electricity in “Paying Too Much for Energy? The True Costs of Our Energy Choices.” It appears in the Spring 2012 issue of Daedalus and is also freely available online.

To set the stage, here’s a comment on the health consequences of our primary fuels for electricity generation (footnotes omitted):

“Our primary sources of energy impose significant health costs–particularly on infants and the elderly, our most vulnerable. For instance, even though many air pollutants are regulated under the Clean Air Act, fine particle pollution, or soot, still is estimated to contribute to roughly one out of every twenty premature deaths in the United States. Indeed, soot from coal power plants alone is estimated to cause thousands of premature deaths and hundreds of thousands of cases of illness each year. The resulting damages include costs from days missed at work and school due to illness, increases in emergency room and hospital visits, and other losses associated with premature deaths. In other countries the costs are still greater; recent research suggests that life expectancies in northern China are about five years shorter than in southern China due to the higher pollution levels in the north. The National Academy of Sciences recently estimated total non-climate change-related damages associated with energy consumption and use to be more than $120 billion in the United States in 2005. Nearly all of these damages resulted from the effects of air pollution on our health and wellness.”

What do the true costs of the energy that we use for producing electricity look like if we include production costs, costs of air pollution not related to carbon emissions, and then a social cost of roughly $21/ton for carbon dioxide emissions related to the risk of climate change? In this figure, the solid black bar is the production cost of the electricity, the checkered area is the non-carbon health costs, and the light gray area is the costs of carbon emissions. For those interested in the underlying data, the private costs of production are from Greenstone and Looney’s own work; the non-carbon health costs are based on estimates from a National Academy of Sciences report; and the costs of carbon are based on a U.S. government Interagency Working Group on the Social Cost of Carbon (with which Greenstone was heavily involved).

If you just look at production costs (dark bars), “existing coal” is clearly the least expensive option. If you take other costs into account, “New Natural Gas” is least expensive. When the horizontal axis refers to “new,” as in “New Natural Gas” or “New Coal,” it is referring to construction of a plant under the current regulatory regime for new plants. Thus, producing electricity from “New Coal” has a higher production cost than from “Existing Coal,” but its health and carbon costs are lower.

Not all costs are included here, as the authors readily acknowledge. For example, the health costs of air pollution don’t include the costs of mining coal or uranium, nor potential health risks for installers of solar panels. “New Nuclear” has no health costs added, not because such costs don’t exist, but because they fall into the dreaded UTQ category, for “Unable to Quantify.” But it’s always important to remember that absence of evidence is not evidence of absence. Clearly, nuclear power does have additional costs and risks, mining coal and uranium has additional costs, and there are environmental risks and consequences of solar and wind power, too. I posted on March 14, 2012, about “The Mundane Cost Obstacle to Nuclear Power.”

The fourth and sixth columns are especially interesting to me, because they offer a realistic way to think about the costs of solar and wind power. One main difficulty with solar and wind is that they are intermittent sources of electricity generation, and thus they must be combined with an alternative. The fourth and sixth columns therefore combine electricity generated by solar or wind with electricity generated from natural gas as a back-up for times when the sun isn’t shining or the wind isn’t blowing. If one looks only at production costs, these combinations aren’t yet competitive with generating electricity from coal or natural gas, but if one looks at all private and social costs, the combination of wind and natural gas already looks fairly competitive with coal.

My own proposed national energy policy is the “Drill-Baby Carbon Tax: A Grand Compromise on Energy Policy.” The idea is to aggressively develop U.S. energy resources and at the same time to impose a tax that would reflect the environmental costs of using those resources. Like many of my policy ideas, most people agree with only half of it--and they disagree about which half.

The GM and Chrysler Bail-Outs

The U.S. government first extended emergency loans to GM and Chrysler in 2008 and 2009, and then stage-managed their 2009 bankruptcies. How is that working out? Thomas H. Klier and James Rubenstein tell the story in “Detroit back from the brink? Auto industry crisis and restructuring, 2008–11” in the second quarter issue of Economic Perspectives from the Federal Reserve Bank of Chicago. I’ll lift facts and background from their more detailed and dispassionate description and tell the story my own way.

The story really starts in the late 1990s. The Big Three traditional U.S. automakers--GM, Ford, and Chrysler--had held about 70-75% of the U.S. auto market through the 1980s and most of the 1990s, but then their market share began plunging, ultimately falling to just 45% of the market in 2009.

When the Great Recession hit, demand for cars dropped off, financing dried up, and gasoline prices spiked all at the same time. All of the Big Three were experiencing large losses, but Ford had a larger cash reserve. GM and Chrysler weren\’t going to make it.

On December 19, 2008, the lame-duck President George W. Bush authorized the use of the Troubled Asset Relief Program to give loans to GM and Chrysler. For the record, the TARP legislation discussed support for “financial firms,” and had nothing to say about helping manufacturing firms. After the Obama administration came to office in early 2009, it gave TARP loans to GM and Chrysler, too. Klier and Rubenstein report: “GM ultimately received $50.2 billion through TARP, Chrysler $10.9 billion, and GMAC $17.2 billion.”

But GM and Chrysler were still bleeding money, and so the federal government stage-managed their bankruptcies. Standard bankruptcy law, in a nutshell, is that the stockholders get wiped out and the creditors and bondholders take losses--but end up owning a restructured firm with renegotiated contracts and obligations. However, Chrysler and GM used a formerly obscure part of the U.S. Bankruptcy Code, Section 363(b), which had been used for the Lehman Brothers bankruptcy. Basically, a newly formed company received all the desirable assets of the old company--properties, personnel, contracts--while the old company kept the toxic stuff. This approach created “new” Chrysler out of “old” Chrysler in a month; GM took five weeks.

The strategies for the two firms were quite different. The plan with Chrysler was basically to get Fiat to run the firm. The table shows the evolution of ownership of “new” Chrysler. In the table, VEBA stands for “voluntary employees’ beneficiary association,” which is the legal form of the trust fund for retirement and health care of the United Auto Workers union. The old sad joke used to be that the big U.S. car companies were really a retirement fund with a car company attached: under this plan, the arrangement became explicit. Fiat was given a 20% ownership share for no cash payment, but with an agreement that it would run the firm and develop new products. The U.S. and Canadian governments took small ownership shares in exchange for the earlier loans they had made. Bondholders of secured debt got 29 cents on the dollar.

Part of the arrangement was that if Fiat met certain targets (sales, exports, developing fuel-efficient cars), then it could expand its ownership share of the firm. In May 2011, Chrysler paid back its government loans and Fiat bought out the remaining government ownership. Chrysler is again a car company primarily owned by, well, a car company, rather than a retirement fund.

GM was a different matter. In this case, the U.S. government took 60.8% ownership while Canadian governments took another 11.7%. Thus, the moniker “Government Motors” was fully deserved. The VEBA trust got 17.5% ownership. The GM bondholders, who in a standard bankruptcy arrangement would have ended up owning the firm, got the smallest slice.

In November 2010, GM had a stock offering and raised $24 billion, allowing the government to get rid of a bunch of its shares. But ultimately, even after the government shouldered out the GM bondholders, it seems unlikely to recoup the TARP money it loaned. Klier and Rubenstein write: “In order for the government’s remaining 32 percent of the company to be worth $26.2 billion, representing all of the government’s remaining unrecovered investment, GM’s market capitalization would have to be approximately $81.9 billion. To achieve this market capitalization, the price of GM stock would have to exceed $52 per share, or more than twice its price in April 2012.”
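The break-even arithmetic in that quotation is easy to verify. A minimal sketch, using only the figures quoted from Klier and Rubenstein; the implied share count at the end is my own inference, not a number reported in the article:

```python
# Break-even check on the government's remaining GM stake,
# using the figures quoted from Klier and Rubenstein above.
gov_stake = 0.32          # government's remaining ownership share
unrecovered = 26.2e9      # unrecovered investment, in dollars

# Market capitalization at which a 32% stake is worth $26.2 billion:
required_cap = unrecovered / gov_stake
print(f"required market cap: ${required_cap / 1e9:.1f} billion")  # ~$81.9 billion

# At the cited break-even price of $52 per share, the implied share count
# (an inference on my part; the article does not report shares outstanding):
implied_shares = required_cap / 52
print(f"implied shares outstanding: roughly {implied_shares / 1e9:.2f} billion")
```

Dividing the unrecovered $26.2 billion by the 32 percent stake reproduces the article's roughly $81.9 billion market-cap threshold.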

The bankruptcy did lead to dramatic changes. Here’s a list of some changes (citations omitted):

  • “GM’s North American bill for hourly labor declined from $16 billion in 2005 to $5 billion in 2010 …
  • “Old GM had 111,000 hourly employees in 2005 and 91,000 in 2008. New GM had 75,000 immediately after bankruptcy in 2009 and 50,000 in 2010 …
  • “GM had closed 13 of the 47 U.S. assembly and parts plants it operated in 2008. GM’s Pontiac, Saturn, and Hummer brands were terminated, and Saab was sold. GM retained four nameplates in North America: Chevrolet, its mass-market brand; Cadillac, its premium brand; Buick; and GMC. …
  • “GM also reduced its dealer network by about 25 percent. …
  • “Detroit’s labor costs were now competitive with foreign producers operating within North America. Hourly labor costs ranged from $58 at Ford to $52 at Chrysler, compared with $55 for Toyota …”

The policy question about GM and Chrysler is sometimes phrased as “should they have been helped, or not.” It’s important to be clear that even though the two firms were helped, they still went into bankruptcy! If the firms hadn’t received TARP loans, they would have gone into bankruptcy, too. Thus, the actual policy question should be to compare the stage-managed bankruptcies that did occur with what might have happened under a more standard bankruptcy procedure.

For example, it seems at least arguable that the accelerated bankruptcy process, which occurred under extreme federal government pressure, was faster and smoother than if the arrangements had been worked out in a standard bankruptcy court proceeding. It seems clear that the federal government shouldered out bondholders, who would have received more in a standard bankruptcy procedure, and thus created some uncertainty about how bondholders of other large firms might be treated in the future. On the other side, the UAW retirement funds did much better out of the stage-managed bankruptcy than they probably would have done in a standard bankruptcy. Fiat appears to have gotten a better deal under the stage-managed bankruptcy of Chrysler than it would have received in a standard bankruptcy. The stage-managed bankruptcies did lead to cost-cutting measures like plant closures, fewer employees, and more competitive wages at GM and Chrysler, but presumably these changes would have happened under a standard bankruptcy procedure, too--and perhaps in a way that led to greater competitiveness for the firms moving forward.

The claim that the U.S. government “saved” GM and Chrysler is wildly overblown. The firms would have continued to exist if they had gone through a standard bankruptcy process. Were the TARP loans to GM and Chrysler and the government intervention in the bankruptcy process worth it? Part of the answer depends on the value you place on the faster bankruptcy process, or on how you feel about a process that gave bondholders less value and the UAW retirement fund more value than they probably would have received in a standard bankruptcy. But as another metric, let’s say that the government ends up eventually losing $10 billion of its investment in GM, which had 50,000 hourly jobs in 2010. Say that in a standard bankruptcy, hourly jobs would have been slashed more sharply, down to 30,000. (Of course, it’s possible that GM would have been managed differently under a standard bankruptcy, in such a way that jobs wouldn’t have needed to be cut as sharply.) Saving 20,000 jobs at a cost of $10 billion works out to $500,000 in government spending per job saved.
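That back-of-the-envelope calculation can be sketched as follows; note that the 30,000-job counterfactual and the $10 billion loss are illustrative assumptions from the paragraph above, not observed figures.

```python
# Cost per job saved under the illustrative assumptions in the text
# (the $10 billion loss and 30,000-job counterfactual are assumed).
government_loss = 10e9        # assumed eventual loss on the GM investment
jobs_actual = 50_000          # GM hourly jobs in 2010
jobs_counterfactual = 30_000  # assumed hourly jobs after a standard bankruptcy

jobs_saved = jobs_actual - jobs_counterfactual      # 20,000 jobs
cost_per_job = government_loss / jobs_saved
print(f"cost per job saved: ${cost_per_job:,.0f}")  # prints $500,000
```

Any alternative counterfactual can be swapped in: halving the assumed job losses doubles the cost per job saved.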

Inequality of Leisure

Inequality of incomes has risen in recent decades. Orazio Attanasio, Erik Hurst, and Luigi Pistaferri provide evidence that inequality of consumption has risen as well. But here, I want to focus on another one of their arguments: the rise in inequality of leisure. There’s a twist here: those who are benefiting from a disproportionate rise in leisure tend to be those with lower skill levels. The evidence is in “The Evolution of Income, Consumption, and Leisure Inequality in The US, 1980-2010,” published as NBER Working Paper #17982 in April 2012. The paper is not freely available on-line, but many academics will have access through their libraries.

Here’s a basic data table on hours of leisure per week, by gender and education level. An explanation from Attanasio, Hurst, and Pistaferri follows:

“[O]ur measure of leisure includes the actual time the individual spends in leisurely activities like watching television, socializing with friends, going to the movies, etc. A few things are of note from Table 1. First, in 1985, low educated men took only slightly more hours per week of leisure than high educated men. As above, we define high educated as those with more than 12 years of schooling. A similar pattern holds for women. However, by 2007, the leisure differences between high and low educated men are substantial. Specifically, low educated men experienced a 2.5 hours per week gain in leisure between 1985 and 2007. High educated men, during the same time period, experienced a 1.2 hour per week decline in leisure. The net effect is that leisure inequality increased dramatically after 1985. Again, similar patterns are found for women. …

“Most of the increase in leisure occurred as a result of changes in the upper tail of the leisure distribution. A greater share of low educated men in 2003-7 are taking more than 50 hours per week of leisure than in 1985. This is not the case for higher educated men. If anything, there is a slightly lower proportion of higher educated men taking more than 50 hours per week of leisure in 2003-7 than there was in 1985. … While it is true that the consumption of the high educated has grown rapidly relative to the consumption of the low educated, it is also true that the leisure time of the low educated has grown rapidly relative to the leisure time of the high educated. … [A]s long as leisure has some positive value, the increase in consumption inequality between high and low educated households during the past few decades will overstate the true inequality in well being between these groups.”

Just to be clear, Attanasio, Hurst, and Pistaferri are in no way making the foolish argument that the rise in income and consumption inequality benefiting those at the top of the income scale shouldn’t matter because it is offset by greater inequality of leisure benefiting those who tend to be at the bottom of the income scale.

But although the U.S. economy has become much less equal with regard to income and consumption, it is worth remembering that these are not the only measures of well-being. For example, I posted on June 29, 2011, about how “Inequality of Mortality” has been greatly reduced. And as leisure has become less equally distributed in a way that tends to favor those with lower skill levels, those with a rising share of leisure are better off in that dimension of well-being, albeit in a way that isn’t captured in income or consumption statistics.