Why Longer Economics Articles?

Articles in leading academic economics journals have roughly tripled in length over the last 40 years. Here's a figure from the paper by David Card and Stefano DellaVigna, "Page Limits on Economics Articles: Evidence from Two Journals," which appears in the Summer 2014 issue of the Journal of Economic Perspectives (28:3, 149-68). Five of the leading research journals in economics over the last 40 years are the Quarterly Journal of Economics, the Journal of Political Economy, Econometrica, the Review of Economic Studies, and the American Economic Review (AER). The authors do a "standardized" comparison that accounts for variations over time and across journals in page formatting. A typical article in one of these leading economics journals was 15-18 pages back in 1970, and now is about 50 pages.

I admit that this topic may be of more absorbing interest to me than to most other humans on planet Earth. I've been Managing Editor of the JEP since the start of the journal in 1987, and the bulk of my job is to edit the articles that appear in the journal. The length of JEP articles hasn't risen much at all during the last 27 years, while the length of articles in other journals has roughly doubled in that time. Am I doing something wrong? In an impressionistic way, what are some of the plausible reasons for additional length?

Ultimately, longer papers in academic research journals reflect an evolving consensus about what constitutes a necessary and useful presentation of research results. Over time, it is plausible that journal editors and paper referees have become more aggressive in requesting that additional materials be presented, additional hypotheses be considered, additional statistical tests be run, and the like.

An economics research paper back in the 1960s often made a point, and then stopped. An academic research paper in the second decade of the 21st century is more likely to spend a few pages setting the stage for its argument and for the big question, give some sense in the introduction of the main results, include a section discussing previous research, include a section laying out the background theory, and so on.

Changes in information and computing technology have pushed economics papers to become longer. There is vastly more data available than in 1970, so academic papers need to spend additional space discussing data. There has been a movement in the last couple of decades toward "experimental economics," in which economists vary certain parameters, either in a laboratory with a bunch of students or, often, in a real-world setting, which also means reporting in the research paper what was done and what data was collected. With cheaper computing power and better software, it is vastly easier to run a wide array of statistical tests, which means that space is needed to explain which tests were run, the differing results of the tests, and which results the author finds most persuasive.

In the past, the ultimate constraint on length of academic journals was the cost of printing and postage. But in web-world, where we live today, distribution of academic research can have a near-zero cost. Editors of journals that are primarily distributed on-line have less incentive to require short articles.

Finally, one should mention the theoretical possibility that academic writing has become bloated over time, filled with loose sloppiness, unneeded and lengthy excursions into technical jargon, and occasional bouts of unrestrained pompousness.

Whatever the underlying cause of the added length of articles in economics journals, it creates a conflict between the underlying purposes of research publications. One purpose of such publications is to create a record of what was done, so that the data, theory, and arguments are spelled out in detail. However, another purpose is to allow findings to be disseminated among other researchers, as well as students and policy-makers, so that the results can be more broadly considered and understood. Longer articles probably do a better job of creating a record of what was done, and why. But given that time limits are real for us all, it now takes more time to read an economics article than it did four decades ago. The added length of journal articles means that many more pages of economics research articles are published, and a smaller proportion of those pages are read. I skim many economics articles, but having the time and space to read an article from beginning to end feels like a rare luxury, and I suspect I'm not alone.

The challenge is how to strike the right balance between the competing purposes of full documentation of research (which if unrestrained could easily run to hundreds of pages of data, statistics, and alternative theoretical models for a typical research paper), and the time limits faced by consumers of that research. Many modern research papers are organized in a way that allows or even encourages skimming to hit the high spots: for example, if you need to know right now about the details of the data collection, or the details of the theoretical model, or the details of statistics, you can skip past those sections.

Another option mentioned by Card and DellaVigna is the role of academic journals that go back to the old days, with a focus on presentation of key results, with all details available elsewhere. They write: "There may be an interesting parallel in the field of social psychology. The top journal in this field, the Journal of Personality and Social Psychology, publishes relatively long articles, as do other influential journals in the discipline. In 1988, however, a new journal, Psychological Science, was created to mirror the format of Science. Research papers submitted to Psychological Science can be no longer than 4,000 words. … Psychological Science has quickly emerged as a leading journal in its area. In social psychology, journals publishing longer articles coexist with journals specializing in shorter, high-impact articles."

My own journal, the Journal of Economic Perspectives, offers articles that are meant as informed essays on a subject, and thus typically meant to be read from beginning to end. We hold to a constraint of about 1,000 published pages per year. (But even in JEP, we are becoming more likely to have added on-line appendices with details about data, additional statistical tests, and the like.) I sometimes say that JEP articles are a little like giving someone a tour of a house by walking around and looking in all the windows. You can get a good overview of the house in that way. But if you really want to know the place, you need to go into all the rooms and take a closer look.

The growing length of articles in economic research journals means that the profession has been giving greater priority to full presentation of the back-story of research, at the expense of readers. In one way or another, the pendulum is likely to swing back, in ways that make it easier for consumers of academic research to obtain a somewhat nuanced view of a range of research, without necessarily being buried in an avalanche of detail–but while still having that avalanche of detail available when desired.

Who is Holding the Large-Denomination Bills?

Most currency in major economies around the world is held in the form of large-denomination bills that ordinary people rarely use, or even see. Kenneth Rogoff documents the pattern as part of his short essay, "Costs and benefits to phasing out paper currency," presented at a conference at the National Bureau of Economic Research in April 2014.

I think I've held a $100 bill in my hand perhaps once in the last decade (and my memory is that the bill belonged to someone else). But the U.S. has $924.7 billion worth of $100 bills in circulation, which represent by value about 77% of all U.S. currency in circulation. In round numbers, say that the U.S. population is 300 million. That works out to roughly $3,100 in $100 bills, on average, for every man, woman, and child in the United States. Here's the table:
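The per-person arithmetic can be checked with a quick back-of-the-envelope sketch; the $924.7 billion and 300 million figures come from the text above, and the rest is simple division:

```python
# Back-of-the-envelope check of the per-person $100-bill arithmetic.
total_value_of_100s = 924.7e9   # dollars held as $100 bills (from the text)
population = 300e6              # rough U.S. population used in the text

per_person_dollars = total_value_of_100s / population
per_person_bills = per_person_dollars / 100

print(f"${per_person_dollars:,.0f} per person")                    # about $3,082
print(f"about {per_person_bills:.0f} hundred-dollar bills each")   # about 31
```

So the stock of $100 bills works out to roughly $3,100, or about 31 bills, per person.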

It's not just a U.S. phenomenon, either. Here are numbers for the euro. The euro has larger-denomination currency in common circulation than does the U.S., including 200- and 500-euro notes. More than half of all the euro notes in circulation, by value, are worth 100 euros or more, and 500-euro notes alone make up 30% of all euros in circulation.

One possible explanation for this phenomenon is that lots of currency is being held outside the borders of the United States and Europe. Given the large size of the bills, it probably isn't being used for ordinary transactions: most people wouldn't hand a $100 bill to a cab driver in Jakarta. But it could be used for holding wealth in a liquid but safe form in countries where other ways of holding wealth might seem risky. However, compared to the U.S. dollar and the euro, the widespread belief is that the Japanese yen is used much less widely outside of its home country. Even so, a hugely disproportionate share of Japan's currency in circulation is in large-denomination bills. A full 87% of the Japanese currency in circulation by value is in the form of 10,000-yen notes (roughly comparable in value to a $100 bill).

Rogoff points out that the same pattern arises in Hong Kong as well. A Hong Kong dollar is worth about 13 cents U.S. More than half of all Hong Kong dollars in circulation by value are $1,000 bills.

It's easy enough to hypothesize explanations as to why so many large-denomination bills are in circulation, but the truth is that we don't really know the answer. It probably has something to do with the extent of tax evasion or illegal transactions, or a need for secrecy, or a fear of other wealth being expropriated. Adding up the value of the large-denomination bills in the U.S., Europe, and Japan, the total is in the neighborhood of $3 trillion. I find it hard to avoid the conclusion that there are some extraordinarily large stashes of cash, in the form of large bills, scattered around the world. I find it hard to imagine how this currency will ever be reintegrated into the banking system; there's just so much of it. This would seem to be a fertile field to plow for some Hollywood movie-maker looking for a real-world hook for a movie with underworld connections, a daring heist, lots of pictures of enormous amounts of cash, chase scenes, multiple double-crosses and triple-crosses, and a "what do you do with it now that you have it?" ending.

For some previous posts about large-denomination bills, see "Who is Using $1 Trillion in U.S. Currency?" (October 25, 2011) and "The Soaring Number of $100 Bills" (June 10, 2013).

Dodd-Frank: Unfinished and Unstarted Business

If a "law" defines what actions can be punished by the state, the Dodd-Frank financial reform law, officially the Wall Street Reform and Consumer Protection Act of 2010, was not actually a "law." Instead, the legislation told regulators to write rules in 398 areas. In turn, there are laws that govern the writing of such rules, like specified time periods for comments and feedback and revision. So it's not a huge surprise, four years later, that the rule-making is not yet completed.

Still, it's a bit disheartening to read the fourth-anniversary report published by the Davis Polk law firm, which has been tracking the 398 rule requirements in Dodd-Frank since its passage. The report notes: "Of the 398 total rulemaking requirements, 208 (52.3%) have been met with finalized rules and rules have been proposed that would meet 94 (23.6%) more. Rules have not yet been proposed to meet 96 (24.1%) rulemaking requirements." For example, bank regulators were required by the law to write 135 rules, of which 70 are currently finalized. The Commodity Futures Trading Commission was to write 60 rules, of which 50 are finalized. The Securities and Exchange Commission was to write 95 rules, of which 42 are finalized. Various other agencies were responsible for 108 rules, of which 46 are finalized.
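The Davis Polk tallies quoted above are internally consistent, which is easy to verify: the three categories sum to 398, and the agency-by-agency finalized counts sum to 208. A quick sketch, using only numbers from the report:

```python
# Cross-check the Davis Polk rule counts quoted above.
finalized, proposed, not_proposed = 208, 94, 96
total = finalized + proposed + not_proposed   # should be 398

# Shares of the 398 rulemaking requirements
pct_finalized = 100 * finalized / total       # 52.3%
pct_proposed = 100 * proposed / total         # 23.6%
pct_not_proposed = 100 * not_proposed / total # 24.1%

# Finalized rules by agency also sum to 208
by_agency = {"bank regulators": 70, "CFTC": 50, "SEC": 42, "other": 46}
finalized_by_agency = sum(by_agency.values())

print(total, finalized_by_agency)   # 398 208
```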

Well, at least 208 rules are completed, right? Not so fast. A completed rule doesn't mean that business has yet figured out how to actually comply with the rule. For example, there is a completed rule which requires that banking organizations with over $50 billion in assets write a "living will," which is a set of plans that would specify how their business would be contracted and then shut down, without a need for government assistance, if that situation arose in a future financial crisis. The 11 banks involved wrote up their living wills, and the Federal Reserve and the Federal Deposit Insurance Corporation rejected the plans as inadequate. They wrote up a second set of living wills, and a few days ago, the Federal Reserve and FDIC again rejected the plans as inadequate.

Maybe part of the problem is that the banks are dragging their feet. But another part of the problem is that writing a rule that in effect says, "do a satisfactory living will," still leaves open many issues about what would actually be satisfactory. One suspects that if and when the next crisis hits, there will suddenly be a bunch of reasons why these "living wills" don't quite apply as written.

Or consider the issue of the credit rating agencies like Standard & Poor's, Moody's and Fitch, which were central to the financial crisis because it was their decision to give securities backed by subprime mortgages a AAA rating that let these securities be so readily issued and broadly held. (For earlier posts on the credit rating agencies, see here and here.) Dodd-Frank requires the Securities and Exchange Commission to issue rules about credit rating agencies, but the rules are not yet issued.

Or what about the rule that if a bank issues a mortgage and then sells it off to be turned into a financial security, the bank has to continue to own at least a portion of that mortgage, so that it has some skin in the game? Barney Frank, of Dodd-Frank fame, has said: "To me, the single most important part of the bill was risk retention." A rule was written, but multiple regulators have since defined it so that almost every mortgage issued can be exempt from the risk-retention regulations.

Or what about private equity? The SEC issued a rule about the fees of private equity firms, but now seems to be backing off the rule.

What about reform of Fannie Mae and Freddie Mac, the giant quasi-public corporations that helped put together the securities backed by subprime mortgages, and then went bankrupt and needed a bailout from the federal government? Not covered by Dodd-Frank. What about the "shadow banking" sector, that is, financial institutions that accept funds which can be pulled out in the short run, but make investments that cannot be quickly liquidated, thus setting the stage for a potential financial run? They were at the heart of the financial crisis in 2008, and they are not covered by Dodd-Frank. What about the asset management industry and exchange-traded funds that invest in bond markets, which the Economist magazine has just warned "may spawn the next financial crisis"? Not covered by Dodd-Frank.

I don't mean to be wholly negative here. The Dodd-Frank rules as implemented will require that financial firms hold a bigger cushion of capital (although perhaps still not big enough). A number of rules promise to keep a closer eye on the largest firms, so that they are less likely to unexpectedly go astray. Some rules requiring that financial derivative contracts be traded in more standardized and open ways should be good for those markets. There are other examples.

But all in all, I fear that most people have reacted to Dodd-Frank as a sort of Rorschach test where the words "financial regulation" are flashed in front of your eyes. If you look at those words and react by saying "we need more financial regulation," then you are a Dodd-Frank fan. If you look at those words and shudder, you are a Dodd-Frank opponent. Dodd-Frank allowed a bunch of pro-regulation Congressmen to take a bow by passing it, and a bunch of anti-regulation Congressmen to take a bow by opposing it. But for those of us who try to live our lives as radical moderates, the issue isn't to be generically in favor of regulation or generically against it, but to look at actual regulations and ask whether they are well-conceived. In that task, the Dodd-Frank legislation mostly used fairly generic language of good intentions, ducked hard decisions, and handed off the hot potato of how financial regulation should actually be written to others.

Morgan Ricks, a law professor at Vanderbilt who studies financial regulation, put it this way: "There is a growing consensus that new financial reform legislation may be in order. The Dodd-Frank Act of 2010, while well-intended, is now widely viewed to be at best insufficient, at worst a costly misfire." The only sure thing about the next financial crisis, whenever it comes, is that it won't look like the previous one. The legacy of the Dodd-Frank legislation, as it grinds through the rule-making process, is at best a modest reduction in the risks of such a crisis and the need for government bailouts.

LEVs in HOVs?

A LEV is a "low-emission vehicle," usually referring to a hybrid electric-gas vehicle. HOV stands for "high-occupancy vehicle," and refers to the lanes on highways set aside for vehicles with multiple riders. But what happens when LEVs with a single occupant are allowed in the HOV lanes?

The fundamental problem here is that letting single-occupant LEVs use the HOV lanes, in the name of reducing auto emissions, imposes congestion costs on carpoolers. After all, the intention behind encouraging LEVs is to reduce carbon emissions. The primary reason for HOV lanes is to provide an incentive for carpooling and ride-sharing and thus to reduce traffic congestion. But as Antonio Bento, Daniel Kaffine, Kevin Roth, and Matthew Zaragoza-Watkins point out in their paper, "The Effects of Regulation in the Presence of Multiple Unpriced Externalities: Evidence from the Transportation Sector," allowing LEVs in the HOV lanes adds to congestion in those lanes, which has costs in terms of time and emissions. Their article appears in the most recent American Economic Journal: Economic Policy (2014, 6(3): 1–29). The AEJ: Economic Policy isn't freely available on-line, but many readers will have access through library subscriptions.

Bento, Kaffine, Roth, and Zaragoza-Watkins write (footnotes and citations omitted): "Recently, in an attempt to reduce automobile-related emissions, policymakers have introduced policies to stimulate the demand for ultra-low-emission vehicles (ULEVs) such as gas-electric hybrids. A popular policy, in place in nine states and under consideration in six others, consists of allowing solo-hybrid drivers access to high-occupancy vehicle (HOV) lanes on major freeways. In this paper, we take advantage of the introduction of this policy in Los Angeles, California to study interactions between multiple unpriced externalities. … Beginning August 10, 2005 and ending June 30, 2011, owners of hybrid vehicles achieving 45 miles per gallon (mpg) or better were able to apply for a special sticker that allowed them access to HOV lanes regardless of the number of occupants in the vehicle."

The authors had access to detailed data on the cars travelling on Los Angeles freeways. The basic analysis of the study is a "regression discontinuity," which basically means looking at whether there is a discontinuous change in traffic levels at the time the policy began. So how do the benefits of lower emissions from more use of LEV cars and the costs of greater congestion in HOV lanes balance out?

Assuming that every LEV observed in the HOV lane during the rush hour was purchased only because of this policy (which of course overstates the benefits considerably), they find that the benefits in terms of reduced emissions are worth about $28,000 per year.

On the other side, the primary cost arises because, as they report, "the increase in travel time during the morning peak on the HOV lane is 9.0 percent and is statistically significant at the 1 percent level; this effect corresponds to an increase of travel time of 2.2 minutes." Of course, this is 2.2 minutes per driver in the conga line of LA traffic in the HOV lanes during morning rush hour. Assuming a value of time of about $21/hour for others in the carpool lane, this adds up to a rise in congestion costs of about $3.3 million per year. This is partly offset by $1.7 million in reduced congestion times in the other traffic lanes, and by the gains in reduced driving time for the hybrid drivers now allowed in the HOV lanes. But the costs still heavily outweigh the benefits.
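To see how a 2.2-minute delay per driver scales up to millions of dollars, here is a rough sketch of the arithmetic. The 2.2-minute delay and the $21/hour value of time come from the text above; the 250 workdays per year is my own assumption, used to back out the number of daily carpool-lane commuters implied by the reported $3.3 million figure (the paper's own driver counts are not given in the excerpt):

```python
# Rough sketch of the congestion-cost arithmetic. The delay (2.2 minutes)
# and value of time ($21/hour) come from the text; the 250 workdays/year
# figure and the implied commuter count are assumptions, not from the paper.
delay_minutes = 2.2
value_of_time_per_hour = 21.0
workdays_per_year = 250          # assumption

cost_per_trip = delay_minutes / 60 * value_of_time_per_hour
print(f"${cost_per_trip:.2f} of lost time per delayed trip")   # $0.77

# Number of daily carpool commuters that would produce $3.3 million/year
implied_daily_drivers = 3.3e6 / (cost_per_trip * workdays_per_year)
print(f"roughly {implied_daily_drivers:,.0f} daily carpool commuters")
```

In other words, a seemingly trivial per-driver delay of well under a dollar per trip, multiplied across tens of thousands of commuters and hundreds of workdays, produces the multi-million-dollar annual cost.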

Consider that when you allow LEVs in the HOV lane, the only time this provides a positive incentive is if the driver is avoiding congestion in the other lanes. Unless the HOV lane is completely free-flowing, which is often not true in Los Angeles, adding more cars to that lane will add to congestion for those drivers. They write: "While adding a single hybrid to any HOV lane at 2 am creates zero social costs of congestion, adding one daily hybrid driver at 7 am to a very congested road in our study area (the I-10W) generates $4,500 in annual social costs. On these exceptionally congested roads, HOV lane traffic may be up to 30 percent above socially optimal levels, implying significant congestion costs from allowing hybrid access."

The authors then calculate the costs of reducing various air pollutants (greenhouse gases, nitrogen oxides, and hydrocarbons) by letting LEVs drive in the HOV lanes. "Our findings imply a best-case cost of $124 per ton of reductions in greenhouse gas emissions, $606,000 per ton of nitrogen oxides (NOx) reduction, and $505,000 per ton of hydrocarbon reduction in the most optimistic calculations. These costs exceed those of other options readily available to policymakers."

This analysis shows how economics clarifies the underlying reality of policy choices. Allowing LEVs in the HOV lanes does not require an explicit outlay of funds, and so often appears cost-free to local traffic authorities. Bento, Kaffine, Roth, and Zaragoza-Watkins write: "Further, a policy that was perceived as 'free' was far from free. We find that it costs carpoolers $3–$9 for every $1 of benefit transferred to hybrid drivers."

If anyone were to propose that carpoolers should pay a special tax, with the money to be sent to those who buy LEVs, everyone would question their sanity. But this policy of allowing LEVs in the HOV lanes has the actual economic effect of taxing those in the carpool lane, not in terms of money but in terms of time, and transferring gains to those who buy LEVs. In effect, it's a policy that tries to pay for reductions in emissions by increasing the costs of traffic congestion. It's muddled thinking.

(Full disclosure: The AEJ: Economic Policy is published by the American Economic Association, which also publishes the Journal of Economic Perspectives, where I have worked as Managing Editor since 1987.)

Grade Inflation: Evidence from Two Policies

Grade inflation in U.S. higher education is a disturbing phenomenon. Here's a distribution of grades over time as compiled by Stuart Rojstaczer and Christopher Healy, and published a couple of years ago in "Where A Is Ordinary: The Evolution of American College and University Grading, 1940-2009," in the Teachers College Record (vol. 114, July 2012, pp. 1-23).

They offer a discussion of causes and consequences of grade inflation. The causes include a desire for colleges to boost the post-graduate prospects of their students and the desire of faculty members to avoid the stress of arguing over grades. The consequence is that high grades carry less informational value, which affects decisions by students on how much to work, by faculty on how hard to prepare, and by future employers and graduate schools on how to evaluate students. Here's a link to a November 2011 post of my own on "Grade Inflation and Choice of Major." Rojstaczer and Healy write:

Even if grades were to instantly and uniformly stop rising, colleges and universities are, as a result of five decades of mostly rising grades, already grading in a way that is well divorced from actual student performance, and not just in an average nationwide sense. A is the most common grade at 125 of the 135 schools for which we have data on recent (2006–2009) grades. At those schools, A’s are more common than B’s by an average of 10 percentage points. Colleges and universities are currently grading, on average, about the same way that Cornell, Duke, and Princeton graded in 1985. Essentially, the grades being given today assume that the academic performance of the average college student in America is the same as the performance of an Ivy League graduate of the 1980s.

For the sake of the argument, let's assume that you think grade inflation is a problem. What can be done? There are basically two policies that can be implemented at the level of a college or university. One is for the college to adopt rules clamping down on grades in some way. The other is for the college to provide more information about the context of grades: for example, by listing on the student's transcript both the student's own grade for the course and the average grade for the course. Wellesley College has tried the first approach, while Cornell University has tried the second.

Kristin F. Butcher, Patrick J. McEwan, and Akila Weerapana discuss "The Effects of an Anti-Grade-Inflation Policy at Wellesley College," in the Summer 2014 issue of the Journal of Economic Perspectives. (Full disclosure: I've been Managing Editor of the JEP since the inception of the journal in 1987.) They write: "Thus, the College implemented the following policy in Fall 2004: average grades in courses at the introductory (100) level and intermediate (200) level with at least 10 students should not exceed a 3.33, or a B+. The rule has some latitude. If a professor feels that the students in a given section were particularly meritorious, that professor can write a letter to the administration explaining the reasons for the average grade exceeding the cap."

Here's a figure showing the distribution of grades in the relevant classes across majors, relative to the 3.33 standard, before the policy was enacted, covering Fall 1998 to Spring 2003. As the authors point out, the higher-grading and lower-grading departments tend to be much the same across colleges and universities.

After the policy was put in place, here is how the path of grades evolved, where the "treated" departments are those that had earlier been above the 3.33 standard, and the "untreated" departments are those that were already below the 3.33 standard. Overall, the higher-grading departments remained higher-grading, but the gaps across departments were no longer as large.

In the aftermath of the change, they find that students were less likely to take courses in the high-grading departments or to major in those departments. For example, economics gained enrollments at the expense of other social science departments. In addition, student evaluations of teachers dropped in the previously high-grading departments.

Talia Bar, Vrinda Kadiyali, and Asaf Zussman discuss "Grade Information and Grade Inflation: The Cornell Experiment," in the Summer 2009 issue of the Journal of Economic Perspectives. As they write: "In the mid-1990s, Cornell University's Faculty Senate had a number of discussions about grade inflation and what might be done about it. In April 1996, the Faculty Senate voted to adopt a new grade reporting policy which had two parts: 1) the publication of course median grades on the Internet; and 2) the reporting of course median grades in students' transcripts. … Curbing grade inflation was not explicitly stated as a goal of this policy. Instead, the stated rationale was that 'students will get a more accurate idea of their performance, and they will be assured that users of the transcript will also have this knowledge.'"

For a sense of the effect of the policy, here are average grades at Cornell before and after the policy took effect. The policy doesn't seem to have held down average grades. Indeed, if you squint at the line a bit, it almost appears that grade inflation increased in the aftermath of the change.

What seems to have happened at Cornell is that when median grades for courses were publicly available, students took more of the courses where median grades were higher. They write: "Our analysis finds that the provision of grade information online induced students to select leniently graded courses—or in other words, to opt out of courses they would have selected absent considerations of grades. We also find that the tendency to select leniently graded courses was weaker for high-ability students. Finally, our analysis demonstrates that a significant share of the acceleration in grade inflation since the policy was adopted can be attributed to this change in students' course choice behavior."

The implication of these two studies is that if an institution wants to reduce grade inflation, it needs to do more than just make information about average grades available. Indeed, making information about average grades available seems to induce students, especially students of lower ability, to choose more easy-grading courses. But as the Wellesley researchers point out, unilateral disarmament in the grading wars is a tricky step. In a world where grade point average is a quick and dirty statistic to summarize academic performance, any school that acts independently to reduce grade inflation may find that its students are on average receiving lower grades than their peers in the same departments at other institutions, and that potential future employers and graduate schools may not spend any time thinking about the reasons why.

Note: For the record, I should note that there are no comprehensive data on grades over time. The Rojstaczer and Healy data is based on their own research, and involves fewer schools in the past and a shift in the mix over time. They write: "For the early part of the 1960s, there are 11–13 schools represented by our annual averages. By the early part of the 1970s, the data become more plentiful, and 29–30 schools are averaged. Data quantity increases dramatically by the early 2000s with 82–83 schools included in our data set. Because our time series do not include the same schools every year, we smooth our annual estimates with a moving centered three-year average." Their current estimates include data from 135 schools, covering 1.5 million students. In the article, they compare their data to other available sources, and make a strong argument that their estimates are a fair representation.

Web Search Market Share

One easy way to get a sense of the level of competition in a market is to look at whether one or a few firms hold most of the market. Here are market shares for U.S. web search. Since 2007, Google has risen from about half to about two-thirds. Microsoft has also seen a rise, while Yahoo has been falling.

The figure is from Dan Frommer's article, "Google has run away with the web search market and almost no one is chasing," at the Quartz website on July 25, 2014. Frommer reports that Google had $37 billion in revenue from its websites last year, which is mostly revenue from web search. Potential entrants to the web search industry face at least three problems.

  • First, most users like what Google web search offers just fine. 
  • Second, it\’s a costly venture to launch a web search company. 
  • Third, Google controls the Chrome internet browser that uses Google as its default for web search, and Android, one of the leading operating systems for mobile devices, and most users just tend to go with default options in their software. Thus, it may take some disruptive web search innovation to challenge Google's position. Of course, potential entrants know that Google is already researching possible innovations, too. 

The Decline of U.S. Entrepreneurship

U.S. entrepreneurship is on a downward trend, and has been for several decades. Ryan Decker, John Haltiwanger, Ron Jarmin, and Javier Miranda offer an overview of the evidence in \"The Role of Entrepreneurship in US Job Creation and Economic Dynamism,\" appearing in the Summer 2014 issue of the Journal of Economic Perspectives. The authors write: \"Evidence along a number of dimensions and a variety of sources points to a US economy that is becoming less dynamic. Of particular interest are declining business startup rates and the resulting diminished role for dynamic young businesses in the economy.\" What is some of this evidence?

There is no official data on \"entrepreneurship.\" Thus, the data on the size and age of firms needs to be treated with care. For example, a big company opening up one more store certainly isn\'t what most of us mean by an \"entrepreneur.\" Nor is a new company formed through a merger or a spin-off from existing companies. In addition, not all small firms are what we commonly think of as entrepreneurial: for example, many small family-run companies have been around for a substantial period of time, and don\'t really expect to experience large-scale growth. Thus, Decker, Haltiwanger, Jarmin, and Miranda focus on data on the age of firms, and they strip out new firms created out of old ones. When the data is sliced in this way, they note that the business startup rate is down (citations omitted).

\”The firm startup rate is measured by the number of new firms divided by the total number of firms. Our calculations based on the Business Dynamic Statistics data show that the annual startup rate declined from an average of 12.0 percent in the late 1980s to an average of 10.6 percent just before the Great Recession, when it plummeted below 8 percent. We also find that the startup rate has declined in all major sectors. We note, however, that in high-tech sectors, the startup rate only began to decline in the post 2000 period. Meanwhile, the average size of startups, as measured by employment, has either remained approximately the same over this time period as measured by the Census
Bureau’s Business Dynamics Statistics data or has declined as measured by the Bureau of Labor Statistics’ Business Employment Dynamics. Either way, the lower startup rate is not being offset by a larger size of startup firms.\”
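
The startup-rate arithmetic in the quoted passage is simple enough to sketch. The firm counts below are hypothetical, chosen only so the ratios match the magnitudes the authors report:

```python
def startup_rate(new_firms, total_firms):
    """Startup rate = number of new firms / total number of firms,
    as defined in the quoted passage."""
    return new_firms / total_firms

# Hypothetical counts, scaled to reproduce the quoted rates.
late_1980s = startup_rate(new_firms=600_000, total_firms=5_000_000)
pre_recession = startup_rate(new_firms=530_000, total_firms=5_000_000)

print(f"Late 1980s: {late_1980s:.1%}")        # 12.0%
print(f"Pre-recession: {pre_recession:.1%}")  # 10.6%
```

The point of the definition is that the rate can fall either because fewer new firms are created or because the stock of existing firms grows; the authors' evidence points to the former.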

As a result, young firms have been playing a smaller role in the U.S. economy over time. The line with diamonds shows the share of firms in the U.S. economy that are less than five years old, and its decline since the 1980s. The line with squares shows the share of job creation from young firms, and its decline since the 1980s. The solid line shows the share of employment at young firms, and its decline since the 1980s.

Two of the biggest problems for the U.S. economy in recent years are a slow pace of job creation and a slow pace of productivity growth. Entrepreneurship is closely linked to both. New firms and young firms as a group have traditionally played a major role in the creation of new jobs. The authors point to an \”up-or-out\” dynamic where many new firms fail, but those who succeed have a lasting effect.

\”For new firms (that is, those with age equal to zero in the Business Dynamics Statistics) … their net job creation is also 2.9 million jobs per year. Over these 30 years [from 1980 to 2010], average net job creation in the entire US private sector was approximately 1.4 million jobs per year. The implication is that cohorts of firms aged one year or older typically exhibit net job declines. … For any given cohort, jobs lost due to the high failure rate of young firms are almost offset by the growth of the surviving firms. Five years after the entry of a typical cohort, total employment is about 80 percent of the original employment contribution of the cohort—in spite of losing about 50 percent of the original employment to business exits.\”
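
The cohort arithmetic in that quote implies substantial growth at the surviving firms. A quick check, using only the two figures from the passage (expressed as shares of the cohort's original employment):

```python
# Shares of a cohort's original employment, taken from the quoted figures.
original = 1.00            # cohort employment at entry (normalized)
remaining_at_year5 = 0.80  # total cohort employment five years after entry
lost_to_exits = 0.50       # original employment lost to business exits

# The firms that survived accounted for the other half of original
# employment, so their employment grew from 0.50 to 0.80 of the original.
survivors_at_entry = original - lost_to_exits
implied_growth = remaining_at_year5 / survivors_at_entry - 1

print(f"Implied employment growth at surviving firms: {implied_growth:.0%}")
```

In other words, the "up-or-out" pattern: half the cohort's original jobs disappear through exits, but the survivors expand their employment by roughly 60 percent.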

Moreover, young firms are not only more likely than existing firms to have high productivity; they also create competitive pressure on existing firms to keep measuring up.

Ian Hathaway and Robert Litan present some complementary evidence on the U.S. decline in entrepreneurship in \"The Other Aging of America: The Increasing Dominance of Older Firms,\" a discussion paper just released by the Brookings Institution. Here\'s one figure showing that older firms are a rising share of US firms in recent decades, while younger firms are a shrinking share. The second figure shows that a greater share of US employment is now at older firms, rather than at younger ones.

The public policy question of how to increase the rate of business start-ups--which needs to involve thinking about how a wide variety of government rules and laws on hiring, production, and land use can create unnecessarily high hurdles for entrepreneurs--will have to wait for another day. Decker, Haltiwanger, Jarmin, and Miranda point out that the start-up rate is falling, and Hathaway and Litan offer a figure showing that it\'s a lack of start-ups, not a greater chance of failure, that is whittling down the role of young firms in the U.S. economy.

(Full disclosure: I\’ve been Managing Editor of the JEP since the first issue in 1987. All JEP articles back to the first issue are freely available on-line, compliments of the American Economic Association.)

The Enigma of Russia\’s Economy

The usual and most basic starting point to understanding a national economy is to look at trends and patterns over a long period of time, and then to look at episodes of divergence from those long-term trends. But applying this most basic analysis to Russia\’s economy doesn\’t work well.

For starters, that economy underwent a dramatic transformation in 1991, when the Soviet Union broke up into its constituent parts and the economy moved away from central planning. It simply isn\'t meaningful to compare pre-1991 economic statistics generated by the intentions of central planners to post-1991 economic statistics based on the results of market patterns. Moreover, in the less than a quarter-century over which one can search for long-run patterns in Russia\'s economic statistics, the data are dominated by short-term factors: Russia\'s bumpy transition away from central planning in the 1990s; its debt default in 1998; the ongoing contraction in the rule of law after Vladimir Putin became president in 2000; the take-off in global oil prices in the 2000s, which greatly benefited Russia as an oil exporter; and the effects on Russia of the global financial crisis of 2008-2009.

In a \"Reality Check\" article in the Summer 2014 issue of the Milken Institute Review, Clifford G. Gaddy and Barry W. Ickes present a graph of average annual growth rates in Russia\'s economy in which many of these factors are at play. The sharp declines in economic growth in the 1990s are probably overstated, because Russia\'s economic output in 1991 was inflated by all the peculiar practices of the earlier central planners, and Russia\'s economic statistics for the 1990s failed to capture much of the unreported \"underground\" economy. Russia\'s economic growth in the 2000s looks so good in large part because of the boom in oil prices.

A recent IMF study trying to identify Russia\’s growth patterns put it this way: \”Russia’s trend growth is volatile. The transition from a centrally planned to a market economy started in 1991, hence time series are relatively short compared to other countries. The size and depth of structural reforms during the 1990’s, the financial crisis and default in 1998, followed by the oil boom in the 2000’s, and subsequent GFC [global financial crisis], make identification of a stable “long run” growth trend a very hard task.\”

But with these deep uncertainties about the underlying path of Russia\'s economy duly noted, it seems plausible that Russia\'s economy is struggling. The growth chart above shows slower growth since the 2008 downturn, and especially slow growth in 2013.
Gaddy and Ickes write: \”In the final months of 2013, before the confrontation created by the annexation of Crimea, Russia’s leaders were focused on a different crisis: the surprising and disturbing fact that an economy they thought was going to grow at 4 percent might not even expand by 1 percent. This “growth crisis” sparked heated exchanges –but no consensus – about how to restore momentum to the economy.\”

The IMF report says it this way: \”[T]he Russian economy appears to be operating close to full capacity, amid weak investment, and low growth.\” With a low birthrate, ill-health, and an aging workforce, Russia\’s workforce is declining, and will fall by about 20% between 2005 and 2025. Here\’s a workforce figure from the IMF:

One way to get a sense of the dismal state of Russia\'s private sector is to look at the ratio of stock price to earnings for Russian companies. For the stock markets of emerging market economies as a whole, the price/earnings ratio is about 12. As one example, the stock market of Zimbabwe has a price/earnings ratio of 12. However, in the stock markets of economies like Iran, Argentina, and Russia, the price/earnings ratio is about 5-6. In other words, when investors look at earnings from a Russian company, they are much less certain than they would be for the typical emerging market economy about whether those earnings reflect an underlying economic reality (as opposed to a set of accounting tricks), or whether the profits are likely to continue.
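
One way to read those price/earnings numbers is through the implied earnings yield, which is just the inverse of the P/E ratio: roughly, the annual return investors demand per dollar of reported earnings. A back-of-the-envelope comparison, using the ratios cited above:

```python
def earnings_yield(pe_ratio):
    """Implied earnings yield = 1 / (price/earnings ratio)."""
    return 1 / pe_ratio

# P/E ratios from the text: ~12 for emerging markets as a whole,
# ~5-6 for Russia, Iran, and Argentina.
emerging = earnings_yield(12)
russia_low, russia_high = earnings_yield(6), earnings_yield(5)

print(f"Emerging markets: {emerging:.1%}")               # ~8.3%
print(f"Russia: {russia_low:.1%} to {russia_high:.1%}")  # ~16.7% to 20.0%
```

Investors are, in effect, pricing a dollar of Russian corporate earnings at roughly half of what they pay for a dollar of typical emerging-market earnings.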

Of course, it\’s easy to generate a list of proposals to boost Russia\’s economy, but most such proposals either have no chance of being enacted, or wouldn\’t help Russia\’s economy, or both. Gaddy and Ickes sum up Russian policy proposals in this way:

One can’t help but be reminded of the old Soviet joke about the collective farm director and his chickens. The chickens are dying at an alarming rate, so much so that Moscow sends in its top expert. “I have an idea,” the expert says. “Switch out the rectangular troughs for triangular ones.” He promises to come back in two weeks to monitor the progress. “So?” he asks on his return. “It didn’t work,” the director replies. “The chickens kept dying.” “I have a better idea,” the expert says. “Paint the coops green.” Two weeks pass, and he’s back. “The chickens kept dying,” the director says. Again, a new idea. Again he returns to hear that the chickens keep dying. One day, the expert comes back, and the director announces, “All the chickens are dead.” “What a shame,” the expert says. “I had so many more great ideas. …”

As Gaddy and Ickes add: \”Today’s experts pose no similar threat, since their “great ideas” will never be put into practice.\” (If this sort of humor appeals to you, an earlier post with some of the old jokes about the Soviet economy is here.)

Gaddy and Ickes conclude: \"Russia does face a growth crisis. This is just dawning on people. It should have been recognized earlier. It wasn’t, because several years of growth produced by the oil-price induced transfer of wealth to Russia from the outside were mistaken for “normal” growth. … Today, many compete in proposing magic solutions to return to growth, proposals that are not feasible economically or politically. Meanwhile, there is one path that is both: the resource track.\" In other words, they suggest that the main economic option for boosting growth that might also be politically acceptable is to open up to foreign investment in the energy sector.

Russia\’s shaky economic prospects matter more broadly than just to the Russian people. Since 2000, Vladimir Putin has mostly been able to cut an implicit deal with the Russian people. On one side, freedom, the rule of law, and democracy are curtailed. On the other side, compared to the 1990s when Russia was in shock from the dissolution of the Soviet Union, Putin offered an economy that was growing, not shrinking, and asserted a more active role for Russia as a player in international relations. There are doubtless a lot of reasons why Russia under Putin has been pushing the boundaries of acceptable international behavior in recent years, before trampling those boundaries altogether in Ukraine. But surely one contributing reason is that when Putin is providing less in terms of domestic economic growth, he feels some pressure to raise the decibel level on Russian assertiveness in international affairs.

Summer 2014 Journal of Economic Perspectives

My job as Managing Editor of the Journal of Economic Perspectives helps to pay the household bills. (Speed-typing these blog posts is a volunteer activity.) All issues of JEP back to the first issue in 1987 are freely available on-line, courtesy of the American Economic Association. I\'ve been running JEP since the first issue in 1987, so I think of this as issue #109. The Summer 2014 issue is now available on-line, although it will take another three weeks or so for paper copies to arrive in the mailboxes of subscribers. I\'m sure I\'ll do some blog posts about specific papers in this issue in the next couple of weeks. But for now, here\'s the compact table of contents, and after that, a longer list of the papers that includes abstracts.

Journal of Economic Perspectives, Summer 2014, Volume 28, Issue 3
Table of Contents

Symposium: Entrepreneurship

\”The Role of Entrepreneurship in US Job Creation and Economic Dynamism,\” by Ryan Decker, John Haltiwanger, Ron Jarmin and Javier Miranda
Full-Text Access

\”Entrepreneurship as Experimentation,\” by William R. Kerr, Ramana Nanda and Matthew Rhodes-Kropf
Full-Text Access

\”Seeking the Roots of Entrepreneurship: Insights from Behavioral Economics,\” by Thomas Astebro, Holger Herz, Ramana Nanda and Roberto A. Weber
Full-Text Access

Symposium: Classic Ideas in Development

\”The Lewis Model: A 60-Year Retrospective,\” by Douglas Gollin
Full-Text Access

\”The Missing \”Missing Middle\”,\” by Chang-Tai Hsieh and Benjamin A. Olken
Full-Text Access

\”Informality and Development,\” by Rafael La Porta and Andrei Shleifer
Full-Text Access

\”Do Poverty Traps Exist? Assessing the Evidence,\” by Aart Kraay and David McKenzie
Full-Text Access

Symposium: Academic Production

\”Page Limits on Economics Articles: Evidence from Two Journals,\” by David Card and Stefano DellaVigna
Full-Text Access

\”What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics,\” by Raj Chetty, Emmanuel Saez and Laszlo Sandor
Full-Text Access

\”The Effects of an Anti-Grade-Inflation Policy at Wellesley College,\” by Kristin F. Butcher, Patrick J. McEwan and Akila Weerapana
Full-Text Access

\”The Research Productivity of New PhDs in Economics: The Surprisingly High Non-success of the Successful,\” by John P. Conley and Ali Sina Onder
Full-Text Access

Articles and Features

\”The Economics of Fair Trade,\” by Raluca Dragusanu, Daniele Giovannucci and Nathan Nunn
Full-Text Access

\”Evaluating Counterterrorism Spending,\” by John Mueller and Mark G. Stewart
Full-Text Access

\”Recommendations for Further Reading,\” by Timothy Taylor
Full-Text Access

_________________________________

Abstracts

Symposium: Entrepreneurship

\”The Role of Entrepreneurship in US Job Creation and Economic Dynamism\”
Ryan Decker, John Haltiwanger, Ron Jarmin and Javier Miranda

An optimal pace of business dynamics—encompassing the processes of entry, exit, expansion, and contraction—would balance the benefits of productivity and economic growth against the costs to firms and workers associated with reallocation of productive resources. It is difficult to prescribe what the optimal pace should be, but evidence accumulating from multiple datasets and methodologies suggests that the rate of business startups and the pace of employment dynamism in the US economy has fallen over recent decades and that this downward trend accelerated after 2000. A critical factor in accounting for the decline in business dynamics is a lower rate of business startups and the related decreasing role of dynamic young businesses in the economy. For example, the share of US employment accounted for by young firms has declined by almost 30 percent over the last 30 years. These trends suggest that incentives for entrepreneurs to start new firms in the United States have diminished over time. We do not identify all the factors underlying these trends in this paper but offer some clues based on the empirical patterns for specific sectors and geographic regions.
Full-Text Access | Supplementary Materials

\”Entrepreneurship as Experimentation\”
William R. Kerr, Ramana Nanda and Matthew Rhodes-Kropf

Entrepreneurship research is on the rise but many questions about the fundamental nature of entrepreneurship still exist. We argue that entrepreneurship is about experimentation; the probabilities of success are low, extremely skewed, and unknowable until an investment is made. At a macro level, experimentation by new firms underlies the Schumpeterian notion of creative destruction. However, at a micro level, investment and continuation decisions are not always made in a competitive Darwinian contest. Instead, a few investors make decisions that are impacted by incentive, agency, and coordination problems, often before a new idea even has a chance to compete in a market. We contend that costs and constraints on the ability to experiment alter the type of organizational form surrounding innovation and influence when innovation is more likely to occur. These factors not only govern how much experimentation is undertaken in the economy, but also the trajectory of experimentation, with potentially very deep economic consequences.
Full-Text Access | Supplementary Materials

\”Seeking the Roots of Entrepreneurship: Insights from Behavioral Economics\”
Thomas Astebro, Holger Herz, Ramana Nanda and Roberto A. Weber

There is a growing body of evidence that many entrepreneurs seem to enter and persist in entrepreneurship despite earning low risk-adjusted returns. This has led to attempts to provide explanations—using both standard economic theory and behavioral economics—for why certain individuals may be attracted to such an apparently unprofitable activity. Drawing on research in behavioral economics, in the sections that follow, we review three sets of possible interpretations for understanding the empirical facts related to the entry into, and persistence in, entrepreneurship. Differences in risk aversion provide a plausible and intuitive interpretation of entrepreneurial activity. In addition, a growing literature has begun to highlight the potential importance of overconfidence in driving entrepreneurial outcomes. Such a mechanism may appear at face value to work like a lower level of risk aversion, but there are clear conceptual differences—in particular, overconfidence likely arises from behavioral biases and misperceptions of probability distributions. Finally, nonpecuniary taste-based factors may be important in motivating both the decisions to enter into and to persist in entrepreneurship.
Full-Text Access | Supplementary Materials

Symposium: Classic Ideas in Development

\”The Lewis Model: A 60-Year Retrospective\”
Douglas Gollin

The Lewis model has remained, for more than half a century, one of the dominant theories of development economics. This paper argues that the power of the model lies in the simplicity of its central insight: that poor countries contain enclaves of economic activity just as rich countries contain enclaves of poverty; and that a proximate explanation for the difference in income per capita across countries is that there are large differences in the relative sizes of their \”modern\” and \”traditional\” sectors. But while the Lewis model contains a powerful and compelling macro narrative, its details have proved somewhat elusive to scholars and students who have followed, and its policy implications are unclear. This paper identifies several key insights of the Lewis model, discusses several different interpretations of the model, and then reviews modern evidence for the central propositions of the model. In closing, we consider the relevance of Lewis for current thinking about development strategies and policies.
Full-Text Access | Supplementary Materials

\”The Missing \”Missing Middle\”\”
Chang-Tai Hsieh and Benjamin A. Olken

Although a large literature seeks to explain the \"missing middle\" of mid-sized firms in developing countries, there is surprisingly little empirical backing for the existence of the missing middle. Using microdata on the full distribution of both formal and informal sector manufacturing firms in India, Indonesia, and Mexico, we document three facts. First, while there are a very large number of small firms, there is no \"missing middle\" in the sense of a bimodal distribution: mid-sized firms are missing, but large firms are missing too, and the fraction of firms of a given size is smoothly declining in firm size. Second, we show that the distribution of average products of capital and labor is unimodal, and that large firms, not small firms, have higher average products. This is inconsistent with many models explaining \"the missing middle\" in which small firms with high returns are constrained from expanding. Third, we examine regulatory and tax notches in India, Indonesia, and Mexico of the sort often thought to discourage firm growth and find no economically meaningful bunching of firms near the notch points. We show that existing beliefs about the missing middle are largely due to arbitrary transformations that were made to the data in previous studies.
Full-Text Access | Supplementary Materials

\”Informality and Development\”
Rafael La Porta and Andrei Shleifer

In developing countries, informal firms account for up to half of economic activity. They provide livelihood for billions of people. Yet their role in economic development remains controversial with some viewing informality as pent-up potential and others viewing informality as a parasitic organizational form that hinders economic growth. In this paper, we assess these perspectives. We argue that the evidence is most consistent with dual models, in which informality arises out of poverty and the informal and formal sectors are very different. It seems that informal firms have low productivity and produce low-quality products; and, consequently, they do not pose a threat to the formal firms. Economic growth comes from the formal sector, that is, from firms run by educated entrepreneurs and exhibiting much higher levels of productivity. The expansion of the formal sector leads to the decline of the informal sector in relative and eventually absolute terms. A few informal firms convert to formality, but more generally they disappear because they cannot compete with the much more-productive formal firms.
Full-Text Access | Supplementary Materials

\”Do Poverty Traps Exist? Assessing the Evidence\”
Aart Kraay and David McKenzie

A \"poverty trap\" can be understood as a set of self-reinforcing mechanisms whereby countries start poor and remain poor: poverty begets poverty, so that current poverty is itself a direct cause of poverty in the future. The idea of a poverty trap has this striking implication for policy: much poverty is needless, in the sense that a different equilibrium is possible and one-time policy efforts to break the poverty trap may have lasting effects. But what does the modern evidence suggest about the extent to which poverty traps exist in practice and the underlying mechanisms that may be involved? The main mechanisms we examine include S-shaped savings functions at the country level; \"big-push\" theories of development based on coordination failures; hunger-based traps which rely on physical work capacity rising nonlinearly with food intake at low levels; and occupational poverty traps whereby poor individuals who start businesses that are too small will be trapped earning subsistence returns. We conclude that these types of poverty traps are rare and largely limited to remote or otherwise disadvantaged areas. We discuss behavioral poverty traps as a recent area of research, and geographic poverty traps as the most likely form of a trap. The resulting policy prescriptions are quite different from the calls for a big push in aid or an expansion of microfinance. The more-likely poverty traps call for action in less-traditional policy areas such as promoting more migration.
Full-Text Access | Supplementary Materials

Symposium: Academic Production

\”Page Limits on Economics Articles: Evidence from Two Journals\”
David Card and Stefano DellaVigna

Over the past four decades the median length of the papers published in the \"top five\" economic journals has grown by nearly 300 percent. We study the effects of a page limit policy introduced by the American Economic Review (AER) in mid-2008 and subsequently adopted by the Journal of the European Economic Association (JEEA) in 2009. We find that the imposition of a 40-page limit on submissions led to no change in the flow of new papers to the AER. Instead, authors responded by shortening and reformatting their papers. For JEEA, in contrast, we conclude that the page-limit policy led authors of longer papers to submit to other journals. These results imply that the AER has substantial monopoly power over submissions, while JEEA faces a very competitive market. Evidence from both journals, and from citations to published papers in the top journals, suggests that longer papers are of higher quality than shorter papers, so the loss of longer submissions at JEEA may have led to a drop in quality. Despite a modest impact of the AER\'s policy on the average length of submissions, the policy had little or no effect on the length of final accepted manuscripts.
Full-Text Access | Supplementary Materials

\”What Policies Increase Prosocial Behavior? An Experiment with Referees at the Journal of Public Economics\”
Raj Chetty, Emmanuel Saez and Laszlo Sandor

We evaluate policies to increase prosocial behavior using a field experiment with 1,500 referees at the Journal of Public Economics. We randomly assign referees to four groups: a control group with a six-week deadline to submit a referee report; a group with a four-week deadline; a cash incentive group rewarded with $100 for meeting the four-week deadline; and a social incentive group in which referees were told that their turnaround times would be publicly posted. We obtain four sets of results. First, shorter deadlines reduce the time referees take to submit reports substantially. Second, cash incentives significantly improve speed, especially in the week before the deadline. Cash payments do not crowd out intrinsic motivation: after the cash treatment ends, referees who received cash incentives are no slower than those in the four-week deadline group. Third, social incentives have smaller but significant effects on review times and are especially effective among tenured professors, who are less sensitive to deadlines and cash incentives. Fourth, all the treatments have little or no effect on rates of agreement to review, quality of reports, or review times at other journals. We conclude that small changes in journals\’ policies could substantially expedite peer review at little cost. More generally, price incentives, nudges, and social pressure are effective and complementary methods of increasing prosocial behavior.
Full-Text Access | Supplementary Materials

\”The Effects of an Anti-grade-Inflation Policy at Wellesley College\”
Kristin F. Butcher, Patrick J. McEwan and Akila Weerapana

Average grades in colleges and universities have risen markedly since the 1960s. Critics express concern that grade inflation erodes incentives for students to learn; gives students, employers, and graduate schools poor information on absolute and relative abilities; and reflects the quid pro quo of grades for better student evaluations of professors. This paper evaluates an anti-grade-inflation policy that capped most course averages at a B+. The cap was binding for high-grading departments (in the humanities and social sciences) and was not binding for low-grading departments (in economics and sciences), facilitating a difference-in-differences analysis. Professors complied with the policy by reducing compression at the top of the grade distribution. It had little effect on receipt of top honors, but affected receipt of magna cum laude. In departments affected by the cap, the policy expanded racial gaps in grades, reduced enrollments and majors, and lowered student ratings of professors.
Full-Text Access | Supplementary Materials

\”The Research Productivity of New PhDs in Economics: The Surprisingly High Non-success of the Successful\”
John P. Conley and Ali Sina Onder

We study the research productivity of new graduates from North American PhD programs in economics from 1986 to 2000. We find that research productivity drops off very quickly with class rank at all departments, and that the rank of the graduate departments themselves provides a surprisingly poor prediction of future research success. For example, at the top ten departments as a group, the median graduate has fewer than 0.03 American Economic Review (AER)-equivalent publications at year six after graduation, an untenurable record almost anywhere. We also find that PhD graduates of equal percentile rank from certain lower-ranked departments have stronger publication records than their counterparts at higher-ranked departments. In our data, for example, Carnegie Mellon\'s graduates at the 85th percentile of year-six research productivity outperform 85th percentile graduates of the University of Chicago, the University of Pennsylvania, Stanford, and Berkeley. These results suggest that even the top departments are not doing a very good job of training the great majority of their students to be successful research economists. Hiring committees may find these results helpful when trying to balance class rank and place of graduation in evaluating job candidates, and current graduate students may wish to re-evaluate their academic strategies in light of these findings.
Full-Text Access | Supplementary Materials

Articles and Features

\”The Economics of Fair Trade\”
Raluca Dragusanu, Daniele Giovannucci and Nathan Nunn

Fair Trade is a labeling initiative aimed at improving the lives of the poor in developing countries by offering better terms to producers and helping them to organize. Although Fair Trade-certified products still comprise a small share of the market–for example, Fair Trade-certified coffee exports were 1.8 percent of global coffee exports in 2009–growth has been very rapid over the past decade. Whether Fair Trade can achieve its intended goals has been hotly debated in academic and policy circles. In particular, debates have been waged about whether Fair Trade makes \”economic sense\” and is sustainable in the long run. The aim of this article is to provide a critical overview of the economic theory behind Fair Trade, describing the potential benefits and potential pitfalls. We also provide an assessment of the empirical evidence of the impacts of Fair Trade to date. Because coffee is the largest single product in the Fair Trade market, our discussion here focuses on the specifics of this industry, although we will also point out some important differences with other commodities as they arise.
Full-Text Access | Supplementary Materials

\”Evaluating Counterterrorism Spending\”
John Mueller and Mark G. Stewart

In this article, we present a simple back-of-the-envelope approach for evaluating whether counterterrorism security measures reduce risk sufficiently to justify their costs. The approach uses only four variables: the consequences of a successful attack, the likelihood of a successful attack, the degree to which the security measure reduces risk, and the cost of the security measure. After measuring the cost of a counterterrorism measure, we explore a range of outcomes for the costs of terrorist attacks and a range of possible estimates for how much risk might be reduced by the measure. Then working from this mix of information and assumptions, we can calculate how many terrorist attacks (and of what size) would need to be averted to justify the cost of the counterterrorism measure in narrow cost-benefit terms. To illustrate this approach, we first apply it to the overall increases in domestic counterterrorism expenditures that have taken place since the terrorist attacks of September 11, 2001, and alternatively we apply it to just the FBI\'s counterterrorism efforts. We then evaluate evidence on the number and size of terrorist attacks that have actually been averted or might have been averted since 9/11.
Full-Text Access | Supplementary Materials
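The break-even logic the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' actual calculation: the function and all the dollar figures below are hypothetical, chosen only to show how the four variables combine.

```python
def breakeven_attacks_averted(annual_cost, loss_per_attack, risk_reduction):
    """How many attacks per year a security measure must avert for its
    benefit to equal its cost, in narrow cost-benefit terms.

    benefit = attacks_averted * loss_per_attack * risk_reduction
    Setting benefit = annual_cost and solving gives the break-even count.
    All inputs are illustrative assumptions, not estimates from the paper.
    """
    return annual_cost / (loss_per_attack * risk_reduction)

# Hypothetical numbers: $75 billion per year in added spending,
# $100 million in losses per attack, and a measure that cuts risk by half.
print(breakeven_attacks_averted(75e9, 100e6, 0.5))  # 1500.0
```

The point of such a sketch is that the answer scales linearly in each input, so even rough ranges for the four variables pin down the order of magnitude of attacks that would need to be averted.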

"Recommendations for Further Reading"
Timothy Taylor
Full-Text Access | Supplementary Materials

Deflation: The Case for Worrying Less

One of the arguments as to why monetary policy should continue to be loose, both for the Federal
Reserve in the United States and for other central banks around the world, is to avoid the risk of a bout of deflation. But is that fear overstated? The Bank for International Settlements offers a discussion, "The costs of deflation: what does the historical record say?", in its 84th Annual Report published last month. (For those not familiar with the BIS, it's a Swiss-based international organization that has been around since 1930 and serves as a main forum for consultation and cooperation between central banks.)

When looking at historical episodes of deflation, the BIS offers a simple comparison. Look at the five years before and after an episode of deflation started for a range of countries, and see what happened to growth of real GDP during that time. As a starting point, consider pre-World War I episodes of deflation from 1860-1901 in ten countries: Belgium, Canada, France, Germany, Italy, Japan, the Netherlands, Switzerland, the United Kingdom, and the United States. Some of these countries had multiple episodes of deflation during this time: for example, the U.S. economy had episodes of deflation starting in 1866, 1881, and 1891. When you match up the five years before and after the starting point of an episode of deflation with the path of GDP growth, here's what you get. The red line shows the price level (which peaks at time zero, because that's how the figure is constructed), and the blue line shows the path of GDP relative to that time zero. For this time period, the onset of deflation on average doesn't seem to have any effect on economic growth.
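The BIS exercise is an event-window comparison: align every episode at its deflation-onset year (call it t = 0) and average the GDP path across episodes. A minimal sketch of that alignment follows; the episode names and GDP index values are made up purely for illustration, not taken from the BIS data.

```python
# Each episode maps event time (years relative to deflation onset)
# to real GDP indexed to 100 at onset. Values below are hypothetical.
episodes = {
    "US-1866": {-2: 95, -1: 98, 0: 100, 1: 102, 2: 105},
    "US-1881": {-2: 94, -1: 97, 0: 100, 1: 103, 2: 106},
}

def average_path(episodes, window):
    """Average the GDP index across episodes at each event time,
    producing the kind of before/after path the BIS figure plots."""
    return {t: sum(ep[t] for ep in episodes.values()) / len(episodes)
            for t in window}

path = average_path(episodes, range(-2, 3))
print(path)  # {-2: 94.5, -1: 97.5, 0: 100.0, 1: 102.5, 2: 105.5}
```

With the real data, comparing the average slope of this path before and after t = 0 is exactly the "does growth change at the onset of deflation" question the BIS asks.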

How about during the early interwar period, meaning the 1920s and early 1930s? Here, the path of GDP growth across the 10 countries in this sample is definitely faster before the start of price deflations rather than after--but it remains positive. BIS writes: "In the early interwar period (mainly in the 1920s), the number of somewhat more costly ('bad') deflations increased: output still rose, but much more slowly - the average rates in the pre- and post-peak periods were 2.3% and 1.2%, respectively. (Perceptions of truly severe deflations during the interwar period are dominated by the exceptional experience of the Great Depression, when prices in the G10 economies fell cumulatively up to roughly 20% and output contracted by about 10%.)"

Finally, what about more recent deflations from 1990 to 2013? For this time period, the sample now includes episodes of deflation in 13 places: Australia, China, the euro area, Hong Kong SAR, Japan, New Zealand, Norway, Singapore, South Africa, Sweden, Switzerland, and the United States (in the third quarter of 2008). The recent episodes of deflation have been much shorter: thus, the red line showing the price level dips slightly, but then starts rising again after about a year. The path of GDP growth looks much the same in the five years before and after the deflation. As BIS writes: "The deflation episodes during the past two and a half decades have, on average, been much more akin to the good types experienced during the pre-World War I period than to those of the early interwar period …"

BIS suggests four lessons that can be taken from this exercise.

"First, the record is replete with examples of 'good', or at least 'benign', deflations in the sense that they coincided with output either rising along trend or undergoing only a modest and temporary setback. …

"The second important feature of deflation dynamics revealed by the historical record is the general absence of an inherent deflation spiral risk - only the Great Depression episode featured a deflation spiral in the form of a strong and persistent decline in the price level; the other episodes did not. … The evidence, especially in recent decades, argues against the notion that deflations lead to vicious deflation spirals. In addition, the fact that wages are less flexible today than they were in the distant past reduces the likelihood of a self-reinforcing downward spiral of wages and prices. …

"Third, it is asset price deflations rather than general deflations that have consistently and significantly harmed macroeconomic performance. Indeed, both the Great Depression in the United States and the Japanese deflation of the 1990s were preceded by a major collapse in equity prices and, especially, property prices. These observations suggest that the chain of causality runs primarily from asset price deflation to real economic downturn, and then to deflation, rather than from general deflation to economic activity. …

"Fourth, recent deflation episodes have often gone hand in hand with rising asset prices, credit expansion and strong output performance. Examples include episodes in the 1990s and 2000s in countries as distinct as China and Norway. There is a risk that easy monetary policy in response to good deflations, aiming to bring inflation closer to target, could inadvertently accommodate the build-up of financial imbalances. Such resistance to 'good' deflations can, over time, lead to 'bad' deflations if the imbalances eventually unwind in a disruptive manner."

In short, the grim experience of the 1930s and its combination of deflation and Great Depression looks like a special case. The typical deflation is not accompanied by a steep recession. Nothing here argues that deflation is desirable or worth seeking to encourage. But the greater danger of economic recession or depression seems to arise not out of price deflation, but instead when asset price bubbles overinflate, and then burst.