International Minimum Wage Comparisons

How does the level of the minimum wage relative to other wages compare across higher-income countries around the world? Here are a couple of figures generated from the OECD website, using data for 2012.

As a starter, here's a comparison of minimum wages relative to average wages. New Zealand, France, and Slovenia are near the top, with a minimum wage equal to about half the average wage. The United States (minimum wage equal to 27% of the average wage) and Mexico (minimum wage equal to 19% of the average wage) are near the bottom.

However, average wages may not be the best comparison. The average wage in an economy with relatively high inequality, like the United States, will be pulled up by the wages of those at the top. Thus, some people prefer to look at minimum wages relative to the median wage, where the median is the wage level at which 50% of workers receive more and 50% receive less. For wage distributions, which typically include some extremely high values, the median wage will be lower than the average–and this difference between median and average will be greater for countries with more inequality.
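
To see why the median sits below the average in a skewed wage distribution, here is a small illustrative simulation using synthetic lognormal "wages" rather than actual wage data; the sigma parameter is a rough stand-in for the degree of inequality.

```python
# Illustrative only: synthetic, right-skewed "wages" showing why the median
# falls below the mean, and why the gap widens as inequality (sigma) rises.
import numpy as np

rng = np.random.default_rng(0)
for sigma in (0.4, 0.8):  # larger sigma = more dispersion = more inequality
    wages = rng.lognormal(mean=3.0, sigma=sigma, size=100_000)
    print(f"sigma={sigma}: mean={wages.mean():.1f}, median={np.median(wages):.1f}, "
          f"median/mean={np.median(wages) / wages.mean():.2f}")
```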

Here's a figure comparing the minimum wage to the median wage across countries. The highest minimum wage by this standard is in Turkey (71% of the median wage), followed by France and New Zealand (about 60% of the median wage). The lowest three are the United States (38%), the Czech Republic (36%), and Estonia (36%).

This post isn't the place to rehearse arguments over the minimum wage one more time: if you want some of my thoughts on the topic, you can check earlier posts like "Minimum Wage and the Law of Many Margins" (February 27, 2013), "Some International Minimum Wage Comparisons" (May 29, 2013), and "Minimum Wage to $9.50? $9.80? $10?" (November 5, 2012). Moreover, minimum wages across countries should also be evaluated in the context of other government spending programs or tax provisions that benefit low-wage families.

However, I will note for US readers that the international comparisons here can give aid and comfort to both sides of the minimum wage argument in this country. Those who would like the minimum wage raised higher can point to the fact that the U.S. level remains relatively low compared to other countries. Those who would prefer not to raise the minimum wage higher can take comfort in the fact that, even after the minimum wage increase signed into law by President Bush in May 2007 and then phased in through 2009, the U.S. minimum wage relative to average or median wages remains comparatively low.

What's the Difference Between 2% and 3%?

If you calculated that the difference between 2% and 3% is 1%, you are of course arithmetically correct, but in an economic sense, you are missing the point. Herb Stein explained the difference in a 1992 essay about the work of Edward Denison on economic growth. Stein wrote:

The difference between 2 percent and 3 percent is not 1 percent but 50 percent. That, of course, is not the result of research–at least, not Denison's–but it is an often-neglected and important proposition that he emphasized. Its significance is that what seems a small increase in the growth rate–say, from 2 to 3 percent–is really a large increase. As a first approximation, such an increase in the growth rate would require an increase of 50 percent in all the resources, effort, and attention that went into generating the 2 percent growth rate.

Denison had died in 1992, and Stein's short remembrance, "Memories of a Model Economist," was published in the Wall Street Journal, November 23, 1992. It was reprinted in On the Other Hand … (pp. 235-239), a 1995 collection of Stein's popular essays and writings published by the AEI Press.

One of the challenges of teaching basic economics is to explain why small differences in the annual rate of economic growth are so important. Stein's comment about Denison is one way to focus attention on these issues. In the short run of a single year, the difference between 2% and 3% is indeed 1%, but when the issue is how to bring down the unemployment rate, that extra percentage point of growth translates into a substantial difference in the number of workers needed, which is a big deal. In the longer run of a decade or two, the key point to remember is that economic growth accumulates, year after year, so losing 1% every year means losing (approximately, not adjusted for compounding of growth rates) 10% after a decade and 20% after two decades.
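
For the record, here is the back-of-the-envelope compounding arithmetic (my own illustration, not Stein's or Denison's) showing how the 2% and 3% paths diverge over one and two decades.

```python
# Compare cumulative growth along a 2% path and a 3% path over several horizons.
for years in (1, 10, 20):
    low = 1.02 ** years   # cumulative growth factor at 2% per year
    high = 1.03 ** years  # cumulative growth factor at 3% per year
    print(f"after {years:2d} years: 2% path = {low:.2f}x, 3% path = {high:.2f}x, "
          f"gap = {(high / low - 1) * 100:.0f}%")
```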

When a nation falls behind in productivity growth over a sustained period of time, it is a matter of decades to make up that foregone productivity growth. (If you doubt it, consider the experience of the United Kingdom or Argentina during the earlier parts of the 20th century, or think about how a quarter-century of lethargic growth has affected perceptions and reality of Japan's economy.) No matter what your public policy goal–more for social programs, tax cuts, deficit reduction, rescuing Social Security and Medicare–the task is politically easier if the growth rate has been on average higher and the economic pie is therefore substantially larger. In the last few years, U.S. economic policy has for good reason been focused on the aftereffects and lessons of the Great Recession. But looking ahead a couple of decades, the single most important factor for the health of the U.S. economy is whether we create an economic climate so that the rate of per capita growth can be 1 or 2% faster per year.

Is the Division of Labor a Form of Enslavement?

The idea that an economy functions through a division of labor, in which we each focus and specialize in certain tasks and then participate in a market to obtain the goods and services we want to consume, is fundamental to economic analysis. Indeed, the very first chapter of Adam Smith's 1776 classic The Wealth of Nations is titled "Of the Division of Labor," and offers the famous example of how dividing up the tasks involved in making a pin is what makes a pin factory so much more productive than an individual who is making pins.

But what if the division of labor, with its emphasis on focusing on a particular narrow job, runs fundamentally counter to something in the human spirit? Karl Marx raised this possibility in The German Ideology (1846, Section 1, "Idealism and Materialism," subsection on "Private Property and Communism"). Marx wrote:

“Further, the division of labor implies the contradiction between the interest of the separate individual or the individual family and the communal interest of all individuals who have intercourse with one another. … The division of labor offers us the first example of how, as long as man remains in natural society, that is, as long as a cleavage exists between the particular and the common interest, as long, therefore, as activity is not voluntarily, but naturally, divided, man's own deed becomes an alien power opposed to him, which enslaves him instead of being controlled by him. For as soon as the distribution of labor comes into being, each man has a particular, exclusive sphere of activity, which is forced upon him and from which he cannot escape. He is a hunter, a fisherman, a shepherd, or a critical critic, and must remain so if he does not want to lose his means of livelihood; while in communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticism after dinner, just as I have a mind, without ever becoming hunter, fisherman, shepherd or critic. This fixation of social activity, this consolidation of what we ourselves produce into an objective power above us, growing out of our control, thwarting our expectations, bringing to naught our calculations, is one of the chief factors in historical development up till now.”

Like so much of Marx's writing, this passage seems to me to give voice to a difficult concept that contains a substantial slice of truth; indeed, I had this quotation up on my office door for a time. But also like a lot of Marx, it seems to ignore or evade counterbalancing arguments.

I suspect we all know people who at times feel trapped by the division of labor. I can think offhand of several friends who aren't happy being lawyers, and a doctor who would have preferred not to become a doctor. When you're grinding out the quarterly reports or the semi-required stint of overtime, it's easy to feel trapped by the narrowness of the job.

But on the other side, the division of labor contains within it an opportunity to learn and specialize–to be the expert in your own field of study. This matters to me both as a consumer and as a worker. As a consumer, I don't want the noontime appointment with a doctor who was a shepherd this morning, a social critic this afternoon, and is planning to try a different set of jobs tomorrow. I want a doctor who works hard at being a doctor. I also want a car made by workers who have experience in their jobs, and I want to drive that car across bridges designed by engineers who spend their working time focused on engineering. As a consumer, I like dealing with goods and services produced by specialists.

As a worker, being stuck in one narrow occupation may feel like a trap. But fluttering from job to job can be a trap of a different kind–a trap of a string of shallow experiences. I don't mean to knock shallow experience: there are a lot of things worth trying only once, or maybe a few times. But you can't get 10 years of experience at any job if you switch jobs every year, or in Marx's illustration, several times per day. There's probably a happy medium here of finding some variation in one's tasks and building expertise in different areas, both in work and in hobbies, over a lifetime. But to me, Marx's advice sounds like telling an ADHD worker to "find your bliss," and then watching that person flit like a butterfly on amphetamines.

Marx's challenge to the division of labor also sidesteps some practical issues. His implication seems to be that what you choose to do as a worker can be detached from what society needs. It's not clear what a society does if, on a given day, not enough people feel like showing up to be garbagemen or day care providers. Markets and pay and defined jobs are a mechanism for coordinating what is produced and consumed, and also for allowing that mechanism to evolve over time according to the range of jobs that people want to do as providers (given a certain wage) and the goods and services that people want in their economic role as consumers.

The division of labor can be constraining, but another fundamental principle of economics is that all choices involve giving up an opportunity to do something else. A world without a division of labor would just be constraining in a different and arguably less attractive way. If you would like some additional ruminations on moral issues surrounding labor markets, one starting point is this post from last month, "Are Labor Markets Exploitative?"

Characteristics of U.S. Minimum Wage Workers

Set aside for a few heartbeats the vexed question of just how a minimum wage would affect employment, and focus on a more basic set of facts: What are some characteristics of U.S. workers who receive the minimum wage? The statistics here are from a short March 2014 report from the U.S. Bureau of Labor Statistics, "Characteristics of Minimum Wage Workers, 2013." Of course, the facts about who is receiving the minimum wage also reveal who will be most directly affected by any changes.

How many workers are paid at or below the minimum wage?

The BLS reports that 75 million American workers were paid at an hourly rate in 2013, out of about 136 million total employed workers. Of that total, 3.3 million, or about 4.3%, were paid at the minimum wage or less. A figure from an April 3, 2014, BLS newsletter puts that level in historical context–that is, the share of hourly-paid workers receiving the federal minimum wage is lower than in most of the 1980s and 1990s, but it is a little higher than in much of the 2000s. Of course, shifts in the share of workers receiving the minimum wage in large part reflect changes in the level of the minimum wage. When the federal minimum wage increase signed into law by President Bush in 2007 was phased in during 2008 and 2009, more workers were affected by the higher minimum wage.

Percentage of hourly paid workers with earnings at or below the federal minimum wage, by sex, 1979–2013 annual averages

What's the breakdown of those being paid the minimum wage by age? In particular, how many are teenagers or in their early 20s?

Of the 3.3 million minimum-wage workers in 2013, about one-quarter were between the ages of 16 and 19, another one-quarter were between the ages of 20 and 24, and half were 25 or older.

What's the breakdown of those being paid the minimum wage by full-time and part-time work status?

Of the 3.3 million minimum-wage workers in 2013, 1.2 million were full-time, and 2.1 million were part-time–that is, roughly two-thirds of minimum-wage workers were part-time.

What's the breakdown of those being paid the minimum wage across regions?

For the country as a whole, remember, 4.3% of those being paid hourly wages get the minimum wage or less. If the states are divided into nine regions, the share of hourly-paid workers getting the minimum wage in each region varies like this: New England, 3.3%; Middle Atlantic, 4.8%; East North Central, 4.3%; West North Central, 4.6%; South Atlantic, 5.1%; East South Central, 6.3%; West South Central, 6.3%; Mountain, 3.9%; Pacific, 1.5%.

The BLS has state-by-state figures, too. There are two main reasons for the variation. Average wages can vary considerably across states, and in areas with lower wages, more workers end up with the minimum wage. In addition, 23 states have their own minimum wage that is set above the federal level. In those states, fewer workers (with exceptions often made in certain categories like food service workers who get tips) are paid at or below the federal minimum wage. It's an interesting political dynamic that many of those who favor a higher federal minimum wage are living in states where the minimum wage is above the federal level; in effect, they are advocating that states that have not adopted the minimum wage policy preferred in their own state be required to do so.

In what industries are hourly-paid workers most likely to receive the minimum wage? 

Percentage of hourly paid workers with earnings at or below the federal minimum wage, by occupation, 2013 annual averages

Whatever one's feelings about the good or bad effects of raising the minimum wage, it seems fair to say that those effects will be disproportionately felt by a relatively small share of the workforce, disproportionately young and part-time, and disproportionately in southern states.

Why Longer Economics Articles?

Articles in leading academic economics journals have roughly tripled in length over the last 40 years. Here's a figure from the paper by David Card and Stefano DellaVigna, "Page Limits on Economics Articles: Evidence from Two Journals," which appears in the Summer 2014 issue of the Journal of Economic Perspectives (28:3, 149-68). Five of the leading research journals in economics over the last 40 years are the Quarterly Journal of Economics, the Journal of Political Economy, Econometrica, the Review of Economic Studies, and the American Economic Review (AER). The authors do a "standardized" comparison that accounts for variations over time and across journals in page formatting. A typical article in one of these leading economics journals was 15-18 pages back in 1970, and now is about 50 pages.

I admit that this topic may be of more absorbing interest to me than to most other humans on planet Earth. I've been Managing Editor of the JEP since the start of the journal in 1987, and the bulk of my job is to edit the articles that appear in the journal. The length of JEP articles hasn't risen much at all during the last 27 years, while the length of articles in other journals has roughly doubled in that time. Am I doing something wrong? In an impressionistic way, what are some of the plausible reasons for additional length?

Ultimately, longer papers in academic research journals reflect an evolving consensus about what constitutes a necessary and useful presentation of research results. Over time, it is plausible that journal editors and paper referees have become more aggressive in requesting that additional materials be presented, additional hypotheses considered, additional statistical tests run, and the like.

An economics research paper back in the 1960s often made a point, and then stopped. An academic research paper in the second decade of the 21st century is more likely to spend a few pages setting the stage for its argument and for the big question, to give some sense of the main results in the introduction, to include a section discussing previous research, to include a section laying out background theory, and so on.

Changes in information and computing technology have pushed economics papers to become longer. There is vastly more data available than in 1970, so academic papers need to spend additional space discussing data. There has been a movement in the last couple of decades toward "experimental economics," in which economists vary certain parameters–either in a laboratory with a bunch of students, or often in a real-world setting–which also means reporting in the research paper what was done and what data was collected. With cheaper computing power and better software, it is vastly easier to run a wide array of statistical tests, which means that space is needed to explain which tests were run, the differing results of the tests, and which results the author finds most persuasive.

In the past, the ultimate constraint on the length of academic journals was the cost of printing and postage. But in web-world, where we live today, distribution of academic research can have a near-zero cost. Editors of journals that are primarily distributed on-line have less incentive to require short articles.

Finally, one should mention the theoretical possibility that academic writing has become bloated over time, filled with loose sloppiness, with unneeded and lengthy excursions into technical jargon, and occasional bouts of unrestrained pompousness.

Whatever the underlying cause of the added length of articles in economics journals, it creates a conflict between the underlying purposes of research publications. One purpose of such publications is to create a record of what was done, so that the data, theory, and arguments are spelled out in detail. However, another purpose is to allow findings to be disseminated among other researchers, as well as students and policy-makers, so that the results can be more broadly considered and understood. Longer articles probably do a better job of creating a record of what was done, and why. But given that time limits are real for us all, it now takes more time to read an economics article than it did four decades ago. The added length of journal articles means that many more pages of economics research articles are published, and a smaller proportion of those pages are read. I skim many economics articles, but having the time and space to read an article from beginning to end feels like a rare luxury, and I suspect I'm not alone.

The challenge is how to strike the right balance between the competing purposes of full documentation of research (which if unrestrained could easily run to hundreds of pages of data, statistics, and alternative theoretical models for a typical research paper), and the time limits faced by consumers of that research. Many modern research papers are organized in a way that allows or even encourages skimming to hit the high spots: for example, if you do not need to know right now about the details of the data collection, or the details of the theoretical model, or the details of the statistics, you can skip past those sections.

Another option mentioned by Card and DellaVigna is the role of academic journals that go back to the old days, with a focus on presentation of key results, with all details available elsewhere. They write: "There may be an interesting parallel in the field of social psychology. The top journal in this field, the Journal of Personality and Social Psychology, publishes relatively long articles, as do other influential journals in the discipline. In 1988, however, a new journal, Psychological Science, was created to mirror the format of Science. Research papers submitted to Psychological Science can be no longer than 4,000 words. … Psychological Science has quickly emerged as a leading journal in its area. In social psychology, journals publishing longer articles coexist with journals specializing in shorter, high-impact articles."

My own journal, the Journal of Economic Perspectives, offers articles that are meant as informed essays on a subject, and thus typically meant to be read from beginning to end. We hold to a constraint of about 1,000 published pages per year. (But even in JEP, we are becoming more likely to add on-line appendices with details about data, additional statistical tests, and the like.) I sometimes say that JEP articles are a little like giving someone a tour of a house by walking around and looking in all the windows. You can get a good overview of the house in that way. But if you really want to know the place, you need to go into all the rooms and take a closer look.

The growing length of articles in economic research journals means that the profession has been giving greater priority to full presentation of the back-story of research, at the expense of readers. In one way or another, the pendulum is likely to swing back, in ways that make it easier for consumers of academic research to obtain a somewhat nuanced view of a range of research, without necessarily being buried in an avalanche of detail–but while still having that avalanche of detail available when desired.

Who is Holding the Large-Denomination Bills?

Most currency in major economies around the world is held in the form of large-denomination bills that ordinary people rarely use–or even see. Kenneth Rogoff documents the pattern as part of his short essay, "Costs and benefits to phasing out paper currency," presented at a conference at the National Bureau of Economic Research in April 2014.

I think I've held a $100 bill in my hand perhaps once in the last decade (and my memory is that the bill belonged to someone else). But the U.S. has $924.7 billion worth of $100 bills in circulation, which represents, by value, about 77% of all U.S. currency in circulation (here's the table). In round numbers, say that the U.S. population is 300 million. That works out to roughly $3,100 in $100 bills–about 31 of them–on average, for every man, woman, and child in the United States.
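
For what it is worth, here is the simple per-capita arithmetic implied by those figures, treating 300 million as a round-number U.S. population.

```python
# Per-capita arithmetic from the figures quoted in the text.
value_of_100s = 924.7e9   # dollars held in $100 bills
population = 300e6        # round-number U.S. population
per_person_dollars = value_of_100s / population
per_person_bills = per_person_dollars / 100
print(f"about ${per_person_dollars:,.0f} in $100 bills per person "
      f"(roughly {per_person_bills:.0f} bills each)")
```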

It's not just a U.S. phenomenon, either. Here are numbers for the euro. The euro has larger-denomination currency in common circulation than does the U.S., including 200- and 500-euro notes. More than half of all the euro notes in circulation, by value, are worth 100 euros or more, and 500-euro notes alone make up 30% of all euros in circulation.

One possible explanation for this phenomenon is that lots of currency is being held outside the borders of the United States and Europe. Given the large size of the bills, it probably isn't being used for ordinary transactions: most people wouldn't hand a $100 bill to a cab driver in Jakarta. But it could be used for holding wealth in a liquid but safe form in countries where other ways of holding wealth might seem risky. However, compared to the U.S. dollar and the euro, the widespread belief is that the Japanese yen is used much less widely outside of its home country. Even so, a hugely disproportionate share of Japan's currency in circulation is in large-denomination bills. A full 87% of the Japanese currency in circulation by value is in the form of 10,000-yen notes (roughly comparable in value to a $100 bill).

Rogoff points out that the same pattern arises in Hong Kong as well. A Hong Kong dollar is worth about 13 cents U.S. More than half of all Hong Kong dollars in circulation, by value, are held in 1,000-dollar notes.

It's easy enough to hypothesize explanations as to why so many large-denomination bills are in circulation, but the truth is that we don't really know the answer. It probably has something to do with the extent of tax evasion or illegal transactions, or a need for secrecy, or a fear of other wealth being expropriated. Adding up the value of the large-denomination bills in the U.S., Europe, and Japan, the total is in the neighborhood of $3 trillion. I find it hard to avoid the conclusion that there are some extraordinarily large stashes of cash, in the form of large bills, scattered around the world. I find it hard to imagine how this currency will ever be reintegrated into the banking system–there's just so much of it. This would seem to be a fertile field to plow for some Hollywood movie-maker looking for a real-world hook for a movie with underworld connections, a daring heist, lots of pictures of enormous amounts of cash, chase scenes, multiple double-crosses and triple-crosses, and a "what do you do with it now that you have it?" ending.

For some previous posts about large denomination bills, see "Who is Using $1 Trillion in U.S. Currency?" (October 25, 2011) and "The Soaring Number of $100 Bills" (June 10, 2013).

Dodd-Frank: Unfinished and Unstarted Business

If a "law" defines what actions can be punished by the state, the Dodd-Frank financial reform law–officially the Wall Street Reform and Consumer Protection Act of 2010–was not actually a "law." Instead, the legislation told regulators to write rules in 398 areas. In turn, there are laws that govern the writing of such rules, like specified time periods for comments and feedback and revision. So it's not a huge surprise, four years later, that the rule-making is not yet completed.

Still, it's a bit disheartening to read the fourth-anniversary report published by the Davis Polk law firm, which has been tracking the 398 rule requirements in Dodd-Frank since its passage. The report notes: "Of the 398 total rulemaking requirements, 208 (52.3%) have been met with finalized rules and rules have been proposed that would meet 94 (23.6%) more. Rules have not yet been proposed to meet 96 (24.1%) rulemaking requirements." For example, bank regulators were required by the law to write 135 rules, of which 70 are currently finalized. The Commodity Futures Trading Commission was to write 60 rules, of which 50 are finalized. The Securities and Exchange Commission was to write 95 rules, of which 42 are finalized. Various other agencies were responsible for 108 rules, of which 46 are finalized.

Well, at least 208 rules are completed, right? Not so fast. A completed rule doesn't mean that business has yet figured out how to actually comply with the rule. For example, there is a completed rule which requires that banking organizations with over $50 billion in assets write a "living will," which is a set of plans that would specify how their business would be contracted and then shut down, without a need for government assistance, if that situation arose in a future financial crisis. The 11 largest banks wrote up their living wills, and the Federal Reserve and the Federal Deposit Insurance Corporation rejected the plans as inadequate. They wrote up a second set of living wills, and a few days ago, the Federal Reserve and FDIC again rejected the plans as inadequate.

Maybe part of the problem is that the banks are dragging their feet. But another part of the problem is that writing a rule that in effect says, "do a satisfactory living will," still leaves open many issues about what would actually be satisfactory. One suspects that if and when the next crisis hits, there will suddenly be a bunch of reasons why these "living wills" don't quite apply as written.

Or consider the issue of the credit rating agencies like Standard & Poor's, Moody's, and Fitch, which were central to the financial crisis because it was their decision to give securities backed by subprime mortgages an AAA rating that let these securities be so readily issued and broadly held. (For earlier posts on the credit rating agencies, see here and here.) Dodd-Frank requires the Securities and Exchange Commission to issue rules about credit rating agencies, but the rules are not yet issued.

Or what about the rule that if a bank issues a mortgage and then sells it off to be turned into a financial security, the bank has to continue to own at least a portion of that mortgage, so that it has some skin in the game? Barney Frank, of Dodd-Frank fame, has said: “To me, the single most important part of the bill was risk retention.” A rule was written, but then multiple regulators have defined the rules so that almost every mortgage issued can be exempt from the risk retention regulations.

Or what about private equity? The SEC issued a rule about the fees of private equity firms, but now seems to be backing off the rule.

What about reform of Fannie Mae and Freddie Mac, the giant quasi-public corporations that helped put together the securities backed by subprime mortgages, and then went bankrupt and needed a bailout from the federal government? Not covered by Dodd-Frank. What about the "shadow banking" sector–that is, financial institutions that accept funds which can be pulled out in the short run, but make investments that cannot be quickly liquidated, thus setting the stage for a potential financial run? They were at the heart of the financial crisis in 2008, and they are not covered by Dodd-Frank. What about the asset management industry and exchange-traded funds that invest in bond markets, which the Economist magazine has just warned "may spawn the next financial crisis"? Not covered by Dodd-Frank.

I don't mean to be wholly negative here. The Dodd-Frank rules as implemented will require that financial firms hold a bigger cushion of capital (although perhaps not a big enough one). A number of rules promise to keep a closer eye on the largest firms, so that they are less likely to unexpectedly go astray. Some rules about having financial derivative contracts be traded in more standardized and open ways should be good for those markets. There are other examples.

But all in all, I fear that most people have reacted to Dodd-Frank as a sort of Rorschach test where the words "financial regulation" are flashed in front of your eyes. If you look at those words and react by saying "we need more financial regulation," then you are a Dodd-Frank fan. If you look at those words and shudder, you are a Dodd-Frank opponent. Dodd-Frank allowed a bunch of pro-regulation Congressmen to take a bow by passing it, and a bunch of anti-regulation Congressmen to take a bow by opposing it. But for those of us who try to live our lives as radical moderates, the issue isn't to be generically in favor of regulation or generically against it, but to try to look at actual regulations and whether they are well-conceived. In that task, the Dodd-Frank legislation mostly used fairly generic language of good intentions, ducked hard decisions, and handed off the hot potato of how financial regulation should actually be written to others.

Morgan Ricks, a law professor at Vanderbilt who studies financial regulation, put it this way: "There is a growing consensus that new financial reform legislation may be in order. The Dodd-Frank Act of 2010, while well-intended, is now widely viewed to be at best insufficient, at worst a costly misfire." The only sure thing about the next financial crisis, whenever it comes, is that it won't look like the previous one. The legacy of the Dodd-Frank legislation, as it grinds through the rule-making process, is at best a modest reduction in the risks of such a crisis and the need for government bailouts.

LEVs in HOVs?

A LEV is a "low-emission vehicle," usually referring to a hybrid electric-gas vehicle. HOV stands for "high-occupancy vehicle," and refers to the lanes on highways set aside for vehicles with multiple riders. But what happens when LEVs with a single occupant are allowed in the HOV lanes?

The fundamental problem here is that letting single-occupant LEVs use the HOV lanes, in pursuit of the goal of reducing auto emissions, imposes congestion costs on carpoolers. After all, the intention behind encouraging LEVs is to reduce carbon emissions. The primary reason for HOV lanes is to provide an incentive for carpooling and ride-sharing and thus to reduce traffic congestion. But as Antonio Bento, Daniel Kaffine, Kevin Roth, and Matthew Zaragoza-Watkins point out in their paper, "The Effects of Regulation in the Presence of Multiple Unpriced Externalities: Evidence from the Transportation Sector," allowing LEVs in the HOV lanes adds to congestion in those lanes, which has costs in terms of time and emissions. Their article appears in the most recent American Economic Journal: Economic Policy (2014, 6(3): 1–29). The AEJ: Economic Policy isn't freely available on-line, but many readers will have access through library subscriptions.

Bento, Kaffine, Roth, and Zaragoza-Watkins write (footnotes and citations omitted): "Recently, in an attempt to reduce automobile-related emissions, policymakers have introduced policies to stimulate the demand for ultra-low-emission vehicles (ULEVs) such as gas-electric hybrids. A popular policy, in place in nine states and under consideration in six others, consists of allowing solo-hybrid drivers access to high-occupancy vehicle (HOV) lanes on major freeways. In this paper, we take advantage of the introduction of this policy in Los Angeles, California to study interactions between multiple unpriced externalities. … Beginning August 10, 2005 and ending June 30, 2011, owners of hybrid vehicles achieving 45 miles per gallon (mpg) or better were able to apply for a special sticker that allowed them access to HOV lanes regardless of the number of occupants in the vehicle."

The authors had access to detailed data on the cars traveling on Los Angeles freeways. The basic analysis of the study is a "regression discontinuity," which basically means looking at whether there is a discontinuous change in traffic levels at the time the policy began.
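
For readers unfamiliar with the method, here is a minimal sketch of the regression-discontinuity-in-time idea on simulated data; it is not the authors' actual specification, and the variable names and simulated series are hypothetical.

```python
# A toy regression discontinuity in time: travel times before and after a policy
# start date, where the coefficient on `post` estimates the jump at the cutoff.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
days = np.arange(-120, 120)            # days relative to the policy start date
post = (days >= 0).astype(int)         # 1 once solo hybrids are allowed in
# Simulated HOV-lane travel times: a smooth trend plus a 2.2-minute jump at the cutoff.
travel_time = 20 + 0.005 * days + 2.2 * post + rng.normal(0, 1.5, days.size)
df = pd.DataFrame({"day": days, "post": post, "travel_time": travel_time})

model = smf.ols("travel_time ~ post + day + post:day", data=df).fit()
print(model.params["post"])            # estimated discontinuity, in minutes
```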

So how do the benefits of lower emissions from more use of LEV cars and the costs of greater congestion in HOV lanes balance out? Assuming that every LEV observed in the HOV lane during the rush hour was purchased only because of this policy–which of course will overstate the benefits considerably–they find that the benefits in terms of reduced emissions are worth about $28,000 per year.

On the other side, the primary cost arises because "the increase in travel time during the morning peak on the HOV lane is 9.0 percent and is statistically significant at the 1 percent level; this effect corresponds to an increase of travel time of 2.2 minutes." Of course, this is 2.2 minutes per driver in the conga line of LA traffic in the HOV lanes during morning rush hour. Assuming a value of time of about $21/hour, the added delay for others in the carpool lane amounts to a rise in congestion costs of about $3.3 million per year. This is partly offset by $1.7 million in reduced congestion times in the other traffic lanes, and by the gains in reduced driving time for the hybrid drivers now allowed in the HOV lanes. But the costs still heavily outweigh the benefits.
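
As a rough check on the order of magnitude, here is the cost-side arithmetic using the paper's 2.2-minute and $21/hour figures; the number of affected carpoolers and the number of commuting days are my own round assumptions, not figures from the paper.

```python
# Illustrative arithmetic only; the carpooler and work-day counts are assumptions.
extra_minutes = 2.2     # added morning-peak travel time per carpooler (from the paper)
value_of_time = 21.0    # dollars per hour (from the paper)
carpoolers = 17_000     # assumed number of carpoolers delayed each morning peak
work_days = 250         # assumed commuting days per year

annual_cost = carpoolers * (extra_minutes / 60) * value_of_time * work_days
print(f"annual congestion cost of about ${annual_cost:,.0f}")  # on the order of $3 million
```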

Consider that when you allow LEVs in the HOV lane, the only time this provides a positive incentive is if the driver is avoiding congestion in the other lanes. Unless the HOV lane is completely free-flowing, which is often not true in Los Angeles, adding more cars to that lane will add to congestion for those drivers. They write: "While adding a single hybrid to any HOV lane at 2 am creates zero social costs of congestion, adding one daily hybrid driver at 7 am to a very congested road in our study area (the I-10W) generates $4,500 in annual social costs. On these exceptionally congested roads, HOV lane traffic may be up to 30 percent above socially optimal levels, implying significant congestion costs from allowing hybrid access."

The authors then calculate the costs of reducing various air pollutants (greenhouse gases, nitrogen oxides, and hydrocarbons) by letting LEVs drive in the HOV lanes. "Our findings imply a best-case cost of $124 per ton of reductions in greenhouse gas emissions, $606,000 per ton of nitrogen oxides (NOx) reduction, and $505,000 per ton of hydrocarbon reduction in the most optimistic calculations. These costs exceed those of other options readily available to policymakers."

This analysis shows how economics clarifies the underlying reality of policy choices. Allowing LEVs in the HOV lanes does not require an explicit outlay of funds, and so often appears cost-free to local traffic authorities. Bento, Kaffine, Roth, and Zaragoza-Watkins write: "Further, a policy that was perceived as “free” was far from free. We find that it costs carpoolers $3–$9 for every $1 of benefit transferred to hybrid drivers."

If anyone were to propose that carpoolers should pay a special tax, with the money to be sent to those who buy LEVs, everyone would question their sanity. But this policy of allowing LEVs in the HOV lanes has the actual economic effect of taxing those in the carpool lane–not in terms of money, but in terms of time–and transferring gains to those who buy LEVs. In effect, it's a policy that tries to pay for reductions in emissions by increasing the costs of traffic congestion. It's muddled thinking.

(Full disclosure: The AEJ: Economic Policy is published by the American Economic Association, which also publishes the Journal of Economic Perspectives where I have worked as Managing Editor since 1986.)

Grade Inflation: Evidence from Two Policies

Grade inflation in U.S. higher education is a disturbing phenomenon. Here's a distribution of grades over time as compiled by Stuart Rojstaczer and Christopher Healy, and published a couple of years ago in "Where A Is Ordinary: The Evolution of American College and University Grading, 1940-2009," in the Teachers College Record (vol. 114, July 2012, pp. 1-23).

They offer a discussion of causes and consequences of grade inflation. The causes include a desire by colleges to boost the post-graduate prospects of their students and the desire of faculty members to avoid the stress of arguing over grades. The consequence is that high grades carry less informational value, which affects decisions by students on how much to work, by faculty on how hard to prepare, and by future employers and graduate schools on how to evaluate students. Here's a link to a November 2011 post of my own on "Grade Inflation and Choice of Major." Rojstaczer and Healy write:

Even if grades were to instantly and uniformly stop rising, colleges and universities are, as a result of five decades of mostly rising grades, already grading in a way that is well divorced from actual student performance, and not just in an average nationwide sense. A is the most common grade at 125 of the 135 schools for which we have data on recent (2006–2009) grades. At those schools, A’s are more common than B’s by an average of 10 percentage points. Colleges and universities are currently grading, on average, about the same way that Cornell, Duke, and Princeton graded in 1985. Essentially, the grades being given today assume that the academic performance of the average college student in America is the same as the performance of an Ivy League graduate of the 1980s.

For the sake of the argument, let's assume that you think grade inflation is a problem. What can be done? There are basically two policies that can be implemented at the level of a college or university. One is for the college to adopt a policy clamping down on grades in some way. The other is for the college to provide more information about the context of grades: for example, by providing on the student's transcript both their own grade for the course and the average grade for the course. Wellesley College has tried the first approach, while Cornell University has tried the second.

Kristin F. Butcher, Patrick J. McEwan, and Akila Weerapana discuss "The Effects of an Anti-Grade-Inflation Policy at Wellesley College," in the Summer 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since the inception of the journal in 1987.) They write: "Thus, the College implemented the following policy in Fall 2004: average grades in courses at the introductory (100) level and intermediate (200) level with at least 10 students should not exceed a 3.33, or a B+. The rule has some latitude. If a professor feels that the students in a given section were particularly meritorious, that professor can write a letter to the administration explaining the reasons for the average grade exceeding the cap."

Here's a figure showing the distribution of grades in the relevant classes across majors, relative to the 3.33 standard, before the policy was enacted (Fall 1998 to Spring 2003). As the authors point out, the higher-grading and lower-grading departments tend to be much the same across colleges and universities.

After the policy was put in place, here is how the path of grades evolved, where the "treated" departments are those that had earlier been above the 3.33 standard, and the "untreated" departments are those that were already below the 3.33 standard. Overall, the higher-grading departments remained higher-grading, but the gaps across departments were no longer as large.

In the aftermath of the change, they find that students were less likely to take courses in the high-grading departments or to major in those departments. For example, economics gained enrollments at the expense of other social science departments. In addition, student evaluations of teachers dropped in the previously high-grading departments.

Talia Bar, Vrinda Kadiyali, and Asaf Zussman discuss "Grade Information and Grade Inflation: The Cornell Experiment," in the Summer 2009 issue of the Journal of Economic Perspectives. As they write: "In the mid-1990s, Cornell University's Faculty Senate had a number of discussions about grade inflation and what might be done about it. In April 1996, the Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students' transcripts. … Curbing grade inflation was not explicitly stated as a goal of this policy. Instead, the stated rationale was that 'students will get a more accurate idea of their performance, and they will be assured that users of the transcript will also have this knowledge.'"

For a sense of the effect of the policy, here are average grades at Cornell before and after the policy took effect. The policy doesn't seem to have held down average grades. Indeed, if you squint at the line a bit, it almost appears that grade inflation increased in the aftermath of the change.

What seems to have happened at Cornell is that when median grades for courses were publicly available, students took more of the courses where median grades were higher. They write: "Our analysis finds that the provision of grade information online induced students to select leniently graded courses—or in other words, to opt out of courses they would have selected absent considerations of grades. We also find that the tendency to select leniently graded courses was weaker for high-ability students. Finally, our analysis demonstrates that a significant share of the acceleration in grade inflation since the policy was adopted can be attributed to this change in students' course choice behavior."

The implication of these two studies is that if an institution wants to reduce grade inflation, it needs to do more than just make information about average grades available. Indeed, making information about average grades available seems to induce students, especially students of lower ability, to choose more leniently graded courses. But as the Wellesley researchers point out, unilateral disarmament in the grading wars is a tricky step. In a world where grade point average is a quick and dirty statistic to summarize academic performance, any school that acts independently to reduce grade inflation may find that its students are on average receiving lower grades than their peers in the same departments at other institutions–and that potential future employers and graduate schools may not spend any time in thinking about the reasons why.

Note: For the record, there are no comprehensive data on grades over time. The Rojstaczer and Healy data are based on their own research, and involve fewer schools in the past and a shift in the mix over time. They write: "For the early part of the 1960s, there are 11–13 schools represented by our annual averages. By the early part of the 1970s, the data become more plentiful, and 29–30 schools are averaged. Data quantity increases dramatically by the early 2000s with 82–83 schools included in our data set. Because our time series do not include the same schools every year, we smooth our annual estimates with a moving centered three-year average." Their current estimates include data from 135 schools, covering 1.5 million students. In the article, they compare their data to other available sources, and make a strong argument that their estimates are a fair representation.

Web Search Market Share

One easy way to get a sense of the level of competition in a market is to look at whether one or a few firms hold most of the market. Here are market shares for U.S. web search. Since 2007, Google's share has risen from about half to about two-thirds. Microsoft has also seen a rise, while Yahoo has been falling.

The figure is from Dan Frommer's article, "Google has run away with the web search market and almost no one is chasing," at the Quartz website on July 25, 2014. Frommer reports that Google had $37 billion in revenue from its websites last year, which is mostly revenues from web search. Potential entrants to the web search industry face at least three problems.

  • First, most users like what Google web search offers just fine. 
  • Second, it's a costly venture to launch a web search company. 
  • Third, Google controls the Chrome internet browser, which uses Google as its default for web search, and Android, one of the leading operating systems for mobile devices–and most users just tend to go with the default options in their software. Thus, it may take some disruptive web search innovation to challenge Google's position. Of course, potential entrants know that Google is already researching possible innovations, too.