Journal of Economic Perspectives, Fall 2013, Now On-line

The Fall 2013 issue of the Journal of Economic Perspectives is now available on-line. All articles from this issue back to the first issue in Summer 1987 are freely available on-line, courtesy of the American Economic Association. I'll post on some of the articles from this issue in the next week or two. But for now, I just want to let people know what's in the issue. (Full disclosure: I've been the Managing Editor of JEP since the first issue in 1987, so I am predisposed to believe that everything in the issue is well worth reading.) This issue starts with a six-paper symposium on "The First 100 Years of the Federal Reserve," followed by a two-paper exchange on "Economics and Moral Virtues," and then by several individual articles.

Symposium: The First 100 Years of the Federal Reserve

"A Century of US Central Banking: Goals, Frameworks, Accountability," by Ben S. Bernanke

Several key episodes in the 100-year history of the Federal Reserve have been referred to in various contexts with the adjective "Great" attached to them: the Great Experiment of the Federal Reserve's founding, the Great Depression, the Great Inflation and subsequent disinflation, the Great Moderation, and the recent Great Recession. Here, I'll use this sequence of "Great" episodes to discuss the evolution over the past 100 years of three key aspects of Federal Reserve policymaking: the goals of policy, the policy framework, and accountability and communication. The changes over time in these three areas provide a useful perspective, I believe, on how the role and functioning of the Federal Reserve have changed since its founding in 1913, as well as some lessons for the present and for the future.
Full-Text Access | Supplementary Materials

"Central Bank Design," by Ricardo Reis

Starting with a blank slate, how could one design the institutions of a central bank for the United States? This paper explores the question of how to design a central bank, drawing on the relevant economic literature and historical experiences while staying free from concerns about how the Fed got to be what it is today or the short-term political constraints it has faced at various times. The goal is to provide an opinionated overview that puts forward the trade-offs associated with different choices and identifies areas where there are clear messages about optimal central bank design.
Full-Text Access | Supplementary Materials

"The Federal Reserve and Panic Prevention: The Roles of Financial Regulation and Lender of Last Resort," by Gary Gorton and Andrew Metrick

This paper surveys the role of the Federal Reserve within the financial regulatory system, with particular attention to the interaction of the Fed's role as both a supervisor and a lender-of-last-resort. The institutional design of the Federal Reserve System was aimed at preventing banking panics, primarily due to the permanent presence of the discount window. This new system was successful at preventing a panic in the early 1920s, after which the Fed began to discourage the use of the discount window and intentionally create "stigma" for window borrowing — policies that contributed to the panics of the Great Depression. The legislation of the New Deal era centralized Fed power in the Board of Governors, and over the next 75 years the Fed expanded its role as a supervisor of the largest banks. Nevertheless, prior to the recent crisis the Fed had large gaps in its authority as a supervisor and as lender of last resort, with the latter role weakened further by stigma. The Fed was unable to prevent the recent crisis, during which its lender of last resort function expanded significantly. As the Fed begins its second century, there are still great challenges to fulfilling its original intention of panic prevention.
Full-Text Access | Supplementary Materials

"Shifts in US Federal Reserve Goals and Tactics for Monetary Policy: A Role for Penitence?" by Julio J. Rotemberg

This paper considers some of the large changes in the Federal Reserve's approach to monetary policy. It shows that, in some important cases, critics who were successful in arguing that past Fed approaches were responsible for mistakes that caused harm succeeded in making the Fed averse to these approaches. This can explain why the Fed stopped basing monetary policy on the quality of new bank loans, why it stopped being willing to cause recessions to deal with inflation, and why it was temporarily unwilling to maintain stable interest rates in the period 1979-1982. It can also contribute to explaining why monetary policy was tight during the Great Depression. The paper shows that the evolution of policy was much more gradual and flexible after the Volcker disinflation, when the Fed was not generally deemed to have made an error.
Full-Text Access | Supplementary Materials

"Does the Federal Reserve Care about the Rest of the World?" by Barry Eichengreen

Many economists are accustomed to thinking about Federal Reserve policy in terms of the institution's "dual mandate," which refers to price stability and high employment, and in which the exchange rate and other international variables matter only insofar as they influence inflation and the output gap — which is to say, not very much. In fact, this conventional view is heavily shaped by the distinctive and peculiar circumstances of the last three decades, when the influence of international considerations on Fed policy has been limited. Historically, the Federal Reserve paid significant attention to international considerations in its first two decades, followed by relative inattention to such factors in the two-plus decades that followed, then back to renewed attention to international aspects of monetary policy in the 1960s, before the recent period of benign neglect of the international dimension. I argue that in the next few decades, international aspects are likely to play a larger role in Federal Reserve policy making than at present.
Full-Text Access | Supplementary Materials

"An Interview with Paul Volcker," by Martin Feldstein

Martin Feldstein interviewed Paul Volcker in Cambridge, Massachusetts, on July 10, 2013, as part of a conference at the National Bureau of Economic Research on "The First 100 Years of the Federal Reserve: The Policy Record, Lessons Learned, and Prospects for the Future." Volcker was Chairman of the Board of Governors of the Federal Reserve System from 1979 through 1987. Before that, he served stints as President of the Federal Reserve Bank of New York from 1975 to 1979, as Under Secretary for International Monetary Affairs in the US Department of the Treasury from 1969 to 1974, as Deputy Undersecretary for Monetary Affairs in the Treasury from 1963 to 1965, and as an economist at the Federal Reserve Bank of New York from 1952 to 1957. He has led and served on a wide array of commissions, including chairing the President's Economic Recovery Advisory Board from its inception in 2009 through 2011.
Full-Text Access | Supplementary Materials

Symposium: Economics and Moral Virtues

"Market Reasoning as Moral Reasoning: Why Economists Should Re-engage with Political Philosophy," by Michael J. Sandel

In my book What Money Can't Buy: The Moral Limits of Markets (2012), I try to show that market values and market reasoning increasingly reach into spheres of life previously governed by nonmarket norms. I argue that this tendency is troubling; putting a price on every human activity erodes certain moral and civic goods worth caring about. We therefore need a public debate about where markets serve the public good and where they don't belong. In this article, I would like to develop a related theme: When it comes to deciding whether this or that good should be allocated by the market or by nonmarket principles, economics is a poor guide. Deciding which social practices should be governed by market mechanisms requires a form of economic reasoning that is bound up with moral reasoning. But mainstream economic thinking currently asserts its independence from the contested terrain of moral and political philosophy. If economics is to help us decide where markets serve the public good and where they don't belong, it should relinquish the claim to be a value-neutral science and reconnect with its origins in moral and political philosophy.
Full-Text Access | Supplementary Materials

"Reclaiming Virtue Ethics for Economics," by Luigino Bruni and Robert Sugden

Virtue ethics is an important strand of moral philosophy, and a significant body of philosophical work in virtue ethics is associated with a radical critique of the market economy and of economics. Expressed crudely, the charge sheet is this: The market depends on instrumental rationality and extrinsic motivation; market interactions therefore fail to respect the internal value of human practices and the intrinsic motivations of human actors; by using market exchange as its central model, economics normalizes extrinsic motivation, not only in markets but also in social life more generally; therefore economics is complicit in an assault on virtue and on human flourishing. We will argue that this critique is flawed, both as a description of how markets actually work and as a representation of how classical and neoclassical economists have understood the market. We show how the market and economics can be defended against the critique from virtue ethics, and crucially, this defense is constructed using the language and logic of virtue ethics. Using the methods of virtue ethics and with reference to the writings of some major economists, we propose an understanding of the purpose (telos) of markets as cooperation for mutual benefit, and identify traits that thereby count as virtues for market participants. We conclude that the market need not be seen as a virtue-free zone.
Full-Text Access | Supplementary Materials

Articles

"Gifts of Mars: Warfare and Europe's Early Rise to Riches," by Nico Voigtländer and Hans-Joachim Voth

Western Europe surged ahead of the rest of the world long before technological growth became rapid. Europe in 1500 already had incomes twice as high on a per capita basis as Africa, and one-third greater than most of Asia. In this essay, we explain how Europe's tumultuous politics and deadly penchant for warfare translated into a sustained advantage in per capita incomes. We argue that Europe's rise to riches was driven by the nature of its politics after 1350 — it was a highly fragmented continent characterized by constant warfare and major religious strife. No other continent in recorded history fought so frequently, for such long periods, killing such a high proportion of its population. When it comes to destroying human life, the atomic bomb and machine guns may be highly efficient, but nothing rivaled the impact of early modern Europe's armies spreading hunger and disease. War therefore helped Europe's precocious rise to riches because the survivors had more land per head available for cultivation. Our interpretation involves a feedback loop from higher incomes to more war and higher land-labor ratios, a loop set in motion by the Black Death in the middle of the 14th century.
Full-Text Access | Supplementary Materials

"The Economics of Slums in the Developing World," by Benjamin Marx, Thomas Stoker and Tavneet Suri

The global expansion of urban slums poses questions for economic research as well as problems for policymakers. We provide evidence that the type of poverty observed in contemporary slums of the developing world is characteristic of that described in the literature on poverty traps. We document how human capital threshold effects, investment inertia, and a "policy trap" may prevent slum dwellers from seizing economic opportunities offered by geographic proximity to the city. We test the assumptions of another theory — that slums are just a transitory phenomenon characteristic of fast-growing economies — by examining the relationship between economic growth, urban growth, and slum growth in the developing world, and whether standards of living of slum dwellers are improving over time, both within slums and across generations. Finally, we discuss why standard policy approaches have often failed to mitigate the expansion of slums in the developing world. Our aim is to inform public debate on the essential issues posed by slums in the developing world.
Full-Text Access | Supplementary Materials

"Recommendations for Further Reading," by Timothy Taylor
Full-Text Access | Supplementary Materials

The Safety of Bioengineered Crops

When I first had to write about genetically modified crops, there was no evidence upon which to draw. It was 1986, and I was working as an editorial writer for the San Jose Mercury News. The first genetically modified plant, a variety of tobacco, had been created in a lab in 1983. A local company called Advanced Genetic Sciences was proposing the first field test of a genetically modified organism. Its plan was to spray some strawberries and potatoes with a bacterium often known as "ice-minus," because it helped prevent frost damage to crops.

The step was quite controversial. For opponents, no buffer zone and no set of precautions could justify spraying this bacterium: indeed, the night before the trial was to take place, protesters broke into the fields and pulled up many of the plants. But I wrote some editorials supporting the trial, in large part because ice-minus bacteria were already fairly widespread in nature. It was perfectly legal to culture the naturally occurring ice-minus bacteria and spray them; this trial just involved spraying genetically identical ice-minus bacteria that had been created in a laboratory. The trial worked fine: that is, the crops were more frost-resistant, and there were no observable negative effects on the environment.

In 1994, the first genetically modified plant to be sold commercially, the Flavr Savr™ tomato, hit the market. But the introduction of genetically engineered crops in the field is usually dated to 1996, when field crops that were genetically engineered to be pest-resistant and herbicide-tolerant were introduced. Now, after the technology has been in widespread use for 17 years, the studies are piling up. In a forthcoming article posted online at Critical Reviews in Biotechnology, Alessandro Nicolia, Alberto Manzo, Fabio Veronesi, and Daniele Rosellini provide "An overview of the last 10 years of genetically engineered crop safety research." They built a comprehensive database of the research literature from 2002 through 2012, consisting of 1,783 articles on different aspects of these first-generation genetically engineered crops. Here is the bottom line of their survey, from the abstract:

"The technology to produce genetically engineered (GE) plants is celebrating its 30th anniversary and one of the major achievements has been the development of GE crops. The safety of GE crops is crucial for their adoption and has been the object of intense research work often ignored in the public debate. We have reviewed the scientific literature on GE crop safety during the last 10 years, built a classified and manageable list of scientific papers, and analyzed the distribution and composition of the published literature. We selected original research papers, reviews, relevant opinions and reports addressing all the major issues that emerged in the debate on GE crops, trying to catch the scientific consensus that has matured since GE plants became widely cultivated worldwide. The scientific research conducted so far has not detected any significant hazards directly connected with the use of GE crops; however, the debate is still intense."

More specifically, they look at how genetically engineered crops have interacted with biodiversity; at risks of "gene flow" to other crops, wild plants, or through the soil; at health effects for animals that eat feed from genetically engineered crops; and at potential health effects for human consumers. As they discuss, some of the evidence raises warning signs that are worth more investigation, and in some cases certain genetically engineered crops are no longer grown, or not grown in certain areas, because of these concerns. But as they write in the conclusion: "We have reviewed the scientific literature on GE [genetically engineered] crop safety for the last 10 years that catches the scientific consensus matured since GE plants became widely cultivated worldwide, and we can conclude that the scientific research conducted so far has not detected any significant hazard directly connected with the use of GM [genetically modified] crops." In short, if you like to believe that you follow the scientific evidence, you should believe that the first generation of pest-resistant and herbicide-tolerant genetically engineered crops has been a great success, substantially increasing crop yields and reducing the need for heavy chemical use.

I support all sorts of rules and regulations and follow-up studies to make sure that genetically engineered crops continue to be safe for the environment and for consumers. After all, the first-generation genetically engineered field crops were all about pest resistance and herbicide-tolerance, and as new types of genetic engineering are proposed, they should be scrutinized. But for me, the purpose of these regulations is to create a clear pathway so that the technology can be more widely used in a safe way, not to create a set of paperwork hurdles to block the future use of the technology.

Farmers have been breeding plants and animals for desired characteristics for centuries. Genetic engineering holds the possibility of speeding up that process of agricultural innovation, so that agriculture can better meet a variety of human needs. Most obviously, genetically engineered crops are likely to be important as world population expands and world incomes continue to rise (so that meat consumption rises as well). In addition, remember that plants serve functions other than calorie consumption. Plants that were more effective at fixing carbon in place might be a useful tool in limiting the rise of carbon in the atmosphere. Genetically modified plants are one of the possible paths to making plant-based ethanol economically viable. Plants that can thrive with less water or fewer chemicals can be hugely helpful to the environment, and to the health of farmworkers around the world. The opportunity cost of slowing the progress of agricultural biotechnology is potentially very high.

Unavoidable Realities of Insurance: Health Care Edition

Insurance markets are unavoidably unpopular, because of a basic fact and an unpalatable implication.

Here's the basic fact about all insurance markets: What gets paid out must equal what gets paid in. Or to put it another way, what is paid in by the average person in premiums must be pretty much equal (with some minor differences noted below) to what is received by the average person in benefits.

And here's the unpalatable implication: Some people will receive much more from insurance payments than they pay in. They might buy life insurance and die young, or buy car insurance and suffer a severe accident, or buy health insurance and face a costly period of hospitalization. But in turn, because of the basic fact that the average person can't receive more in benefits than the average person pays in premiums, there will inevitably be many people–probably a majority of insurance buyers–who will receive much less in insurance benefits than they pay in.
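
Just to make the arithmetic of that basic fact concrete, here is a minimal sketch in Python, using invented round numbers rather than any actual insurance data:

    # A pool of 1,000 policyholders, each paying a $1,000 annual premium.
    n_policyholders = 1000
    premium = 1000.0
    total_paid_in = n_policyholders * premium        # $1,000,000 flows in

    # Suppose 2% of the pool has a catastrophic year.
    n_claimants = 20
    payout_per_claim = 50_000.0
    total_paid_out = n_claimants * payout_per_claim  # $1,000,000 flows out

    # What gets paid out equals what gets paid in...
    assert total_paid_in == total_paid_out

    # ...so 20 buyers receive 50 times their premium, while the rest get nothing.
    share_worse_off = (n_policyholders - n_claimants) / n_policyholders
    print(f"{share_worse_off:.0%} of buyers receive less than they paid in")  # 98%

The numbers are arbitrary, but the constraint is not: raise the payouts, and the premiums must rise with them.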

For example, I'm a direct purchaser of insurance for my home, car, and (term) life, as well as an indirect purchaser of health and dental insurance through my employer. The best possible outcome for me is that I would pay premiums year after year, all my adult life, and never receive more than minimal insurance benefits. After all, receiving significant payments from an insurance company would mean that my family had experienced damage to our home, or a car accident, or sickness, injury, or death.

Any market where the good outcome for a majority of buyers is to make payments year after year in exchange for little or no benefit is going to be continually unpopular.

A lot of energy goes into trying to ignore or deny either the basic fact about insurance markets or the unpalatable implication. The expansions of what health insurance policies must cover that are required by the Patient Protection and Affordable Care Act offer a vivid example. The rules expanding what a typical health insurance plan must cover mean that the average person will need to pay higher premiums, because the benefits being paid out of the health insurance system need to equal what is paid in. Moreover, some people are going to draw on these additional coverage provisions much more than others, so many of those who are unlikely to draw upon such policies will find an even bigger gap between what they are paying in insurance and the insurance benefits they personally receive. Indeed, many of these all-pay, little-benefit households–and many people have a pretty good idea whether they fall into this category–are being required under the new law to pay for others to receive a level of health insurance coverage that they had not previously chosen to receive for themselves.

There will always be a political dynamic to promise that the majority should receive more from their insurance, but that no one should need to pay more in premiums. For example, the original Medicare legislation back in 1965 required that premiums paid by the elderly should cover 50% of the costs of Part B. But Medicare spending went up, premiums didn't, and in 1997 a law needed to be passed to ensure that premiums paid by the elderly would cover 25% of the costs of Part B.

Another way to quarrel with the basic fact about insurance is to point out that private insurance companies spend money on administrative costs and on profits. In addition, insurance companies earn revenue from investing reserves. One can argue that insurance companies could be more efficient, or their profits could be more regulated, and that in these ways benefits might be increased somewhat without paying more in premiums. I'm all for encouraging greater efficiency, and I think the government has been slow in pushing the private sector to coordinate on formats for electronic medical records and billing. But the National Association of Insurance Commissioners reports that in 2012, health insurance companies paid out 85.7% of the premiums they received in benefits, while 11.8% went to administrative expenses, and the other 2.7% was profit margin. In other words, the overwhelming share of money paid into health insurance in premiums is indeed paid out to health care providers. The existence of private-sector insurance companies may bend the basic fact that what is paid out in insurance benefits must equal what is paid in, but it does not sever the connection.

Just to be clear, neither the fundamental fact about insurance nor the unpalatable implication goes away with a government-run or a single-payer insurance system. Whether it's Medicare or Medicaid or private sector health insurance or government-run exchanges, it still holds true that benefits must be paid for. It's tempting, of course, to assert that the U.S. government could run a nonprofit health care system in an efficient and cost-effective manner that reduced administrative costs, but even if this assertion is true, as noted a moment ago the additional revenue from greater administrative efficiency is a modest share of total health care spending. Also, given events in recent weeks, the assertion that the U.S. government could run an efficient and cost-effective health insurance system must be open to considerable dispute. There's a reason why, in a country as large and diverse as the United States, insurance regulation has typically been done at the state level rather than the federal level. Even with a government-run or single-payer system, it still holds true that because some people will experience very high costs, the typical person will pay into the health insurance system more than they ever get out–and should be perversely grateful to be lucky in this way.

I have for years favored having the government spend more to subsidize health insurance for those 50 million or so Americans who have lacked it. My preference has been to fund these subsidies by placing limits on the tax exemption given for employee compensation paid in the form of employer-paid health insurance. (For some estimates from the Congressional Budget Office of how such limits could raise $46-$101 billion per year in extra tax revenue when phased in 10 years from now, see Option 15 in Chapter Five of this recent CBO report.)

However, the Patient Protection and Affordable Care Act goes well beyond providing assistance to those without insurance. It has been promoted with a set of promises that everyone in both the individual and employer-provided insurance market can have the same or more insurance coverage while everyone pays the same or lower private-sector insurance premiums–and while future increases in government health care spending are lower than they otherwise would have been. In seeking to carry out these promises in defiance of the basic economics of insurance markets, the law will necessarily disrupt the health care arrangements for a sizable share of the 200 million or so Americans who already have private health insurance.

Thoughts about the Future of Print

My work life is mostly in the world of print. I run an academic economics journal. Yes, I give some lectures now and then, but I write this blog, and I have written an introductory economics textbook and a book for the general public on understanding economics. So I'm always on the lookout for articles on the future of media in the digital age, and here are three that recently caught my eye.

Frank Rose writes on "The Future of Media: First Print, Then Cable," in the Fourth Quarter 2013 issue of the Milken Institute Review. He sets the stage with a review of how the 20th century ended at different times for different media industries:

"Contrary to popular belief, the 20th century did not end at the same time for everyone. For those in the music business, it sputtered to a halt with the close of 1999 – the year recorded-music sales in the United States reached their all-time high of $14 billion; by 2012, sales had dropped to half that. For newspapers, the end of the century arrived a year later, when U.S. ad revenues hit nearly $49 billion; now they’re down to a mere $22 billion. Hollywood was able to stave off the century’s demise until 2002, when box-office admissions in the United States and Canada peaked at 1.6 billion; last year they were down 15 percent from that (which is not bad, considering). Magazines, for their part, seemed to lead a charmed life – through 2006, anyway, when ad pages topped out at 253,000; since then, they have dropped by a third. As for television, the major broadcast networks have been on a slow, inexorable slide into the future for decades. In 1980, according to Nielsen, well over half the TV sets in America were tuned to ABC, NBC or CBS during prime time. By 2012, even with the addition of the Fox network, the portion tuned to a broadcast network was below one quarter."

Rose also describes what he sees as three key expectations that are emerging for media in the digital age. 

"Over the past few years, as more and more people have become comfortable with smartphones, tablets and services that let you buy or stream stuff online, three key expectations have emerged. Any one of these would be challenging to a 20th-century media company. Taken together, they require a total rethinking of the enterprise. First, people are coming to expect an infinitude of choice. They want news and entertainment on their own terms. They see no reason they shouldn’t be able to watch movies or TV shows or listen to music whenever they want, wherever they happen to be, on whatever device they have with them – and with a minimum of advertising, thank you. … Meanwhile, a second set of expectations has been developing. Humans have always wanted to somehow inhabit the stories that move them – to be spellbound, entranced, transported to another realm. … Historically we’ve managed to plunge in with whatever technology is at hand, from books to film to TV. But digital raises the possibility of immersing ourselves even further, and in entirely new ways. … But there is a third, closely related expectation as well: more and more, audiences expect some kind of active involvement. No longer passive consumers, they now want to be participants – critics, providers and (through the act of sharing on social media) distributors of information."

But oddly enough, you may have noticed that Rose's list of media industries that have been overwhelmed by digital forces doesn't include plain old books. Evan Hughes offers an argument for why plain old books have held up so well in the digital era in "Books Don't Want to Be Free," which appears in the October 21 issue of The New Republic.

"If you’re in the business of selling journalism, moving images, or music, you have seen your work stripped of value by the digital revolution. Translate anything into ones and zeroes, and it gets easier to steal and harder to sell at a sustainable price. Yet people remain willing to fork over a decent sum for books, whether in print or in electronic form. “I can buy songs for 99 cents, I can read most newspapers for free, I can rent a $100 million movie tonight for $2.99,” Russ Grandinetti, Amazon’s vice president of Kindle content, told me in January. “Paying $9.99 for a best-selling book—paying $10 for bits?—is in many respects a very strong accomplishment for the business.”"

Hughes emphasizes two main reasons why books have held up so well in the digital era. First, books are less vulnerable to disaggregation: you can pull apart and sell individual songs on an album of popular music much more easily than you can pull apart chapters of most books. Second, books are already so low-tech that they are in an odd way less affected by digital technology.

"Part of the problem for journalism, music, and television is that they are vulnerable to disaggregation. Their products are made up of songs and articles and shows that have long been consumed in those individual units. Once the Internet made it possible to ignore the unwanted material, overall value slipped. Easy access to favorite singles opened those up to impulse buys—but also made purchasing the whole record feel almost indulgent, a splurge for audiophiles and diehard fans. Now the TV viewer wants “Breaking Bad” without bills from Comcast. The ability to score individual articles by the clicks and ad dollars they reap has exposed vital but embattled forms like international reporting and arts criticism to further pruning.

Hollywood has fallen victim not to disaggregation but to its opposite: Netflix and the like have bundled films into affordable smorgasbords, undermining the perceived value of each movie. (More recently, Pandora and Spotify have hit music from this second flank.) The dark side of digitization—piracy— is obviously another force at play. Sites like BitTorrent, following in Napster’s wake, have helped convince a generation that they shouldn’t have to pay for the goods. In publishing, meanwhile, the deal with the customer has always been dead simple, and the advent of digital has not changed it: You pay the asking price, and we give you the whole thing. It would make little sense to break novels or biographies into pieces, and they’re not dependent on the advertising that has kept journalism and television artificially inexpensive and that deceives the consumer into thinking the content is inexpensive to make …

Some print stalwarts find e-readers a step down, but the fundamental experience of immersing yourself in a text is not bound up in any particular medium or venue. Reading never depended too much on sensual verisimilitude, only a mental leap from the words to the ideas they represent. The book is so low-tech, it’s hard for technology to degrade it."

Finally, what a lot of digital media comes down to is the issue of how to get and keep the attention of possible consumers. Hal Varian emphasizes this point in "The Economics of the Newspaper Business," a talk he gave at an Italian journalism awards ceremony in September. The talk is an absolutely elegant example of how to say something real and interesting in a very short time window. Varian argues:

"The fundamental challenge facing newspapers is to increase the time people spend on their content. … Subscribers to physical newspapers spend about 25 minutes a day reading them. The typical time spent on an online news site in the US and UK is about 2-4 minutes, roughly one-eighth as much. Interestingly, newspapers in the US make about one-eighth of their total ad revenue from online ads. If readers spent as much time reading the online content as the offline content, ad revenue from online content would be much closer to the offline revenue."
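
Varian's proportionality point can be checked with a few lines of arithmetic. Here is a minimal sketch in Python using the round numbers from his talk (the variable names and framing are mine, not his):

    # Attention: print subscribers spend ~25 minutes a day; online visitors
    # spend roughly one-eighth as much, per Varian.
    offline_minutes = 25.0
    online_minutes = offline_minutes / 8

    online_share_of_attention = online_minutes / (online_minutes + offline_minutes)
    online_share_of_ad_revenue = 1 / 8   # Varian's figure for US newspapers

    print(f"online share of reading time: {online_share_of_attention:.1%}")  # ~11.1%
    print(f"online share of ad revenue:   {online_share_of_ad_revenue:.1%}")  # 12.5%

The two shares land within a couple of percentage points of each other, which is exactly Varian's point: online ad revenue tracks online attention, so the route to higher online revenue runs through more reader minutes.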

My job is to be Managing Editor of the Journal of Economic Perspectives, which is one of seven journals published by the American Economic Association. A couple of years ago, the AEA decided that all issues of the journal, from the first issue back in 1987 up to the most recent, would be freely available online. Both individual articles and entire issues can also be downloaded to e-readers like Kindle or Nook. Of course, making a publication free only works because the AEA has income from other sources, like dues from its members, subscriptions from libraries, and payments for the EconLit service it runs that indexes the articles in academic economics journals. This blog, like many blogs, is created from curiosity, obsessiveness, and attention-seeking, but is otherwise done without any specific payment.

Goldilocks Fiscal Policy: Just About Right

There's an old saying that the problem with trying to be middle-of-the-road is that you get hit by traffic going in both directions. Nonetheless, standing in the middle of those who argue vehemently that the budget deficits should be much smaller (because the recession ended in June 2009 and the deficits didn't help much in stimulating the economy anyway) and those who argue that the deficits should be much larger (because unemployment remains stubbornly high at 7.3% and a bigger deficit would help to reduce that rate), I actually think the budget deficit is about where it should be.

The Monthly Budget Review that the Congressional Budget Office published on November 7 has an overview of the budget picture through Fiscal Year 2013, which ended on September 30. In the figure, federal spending and revenue are shown in the way I prefer–as a percentage of GDP.

The story of fiscal policy since 1980 is encapsulated in these lines. For example, the Reagan tax cuts and spending increases that led to such large budget deficits in the 1980s are clear.

In the 1990s, there's a slow decline in federal spending (as a share of GDP) as defense spending fell and a growing economy reduced the need for income support programs. On the revenue side, Bill Clinton and the Democrats passed a tax increase in 1993 with no Republican votes at all, which together with the surge in tax revenues from the dot-com boom of the 1990s raised the federal tax take.

In the 2000s, tax revenue first falls with the end of the dot-com boom and the Bush tax cuts, and then rises with the construction and housing price bubble in mid-decade. Spending rises a bit in the early 2000s as a recession increases the need for government income support programs and the aftermath of 9/11 pushes up defense-related spending again.

When the Great Recession hits in late 2007, spending turns sharply up and tax revenue turns sharply down. In part, this change is built into government programs. When the economy slumps and income falls, tax revenues will drop automatically, too. Similarly, when the economy slumps more people become eligible for government support programs. These "automatic stabilizers," as they are called, will increase the deficit, and then additional tax cuts or spending increases will raise it further. (Here's a post about a CBO report from last March laying out what happened with automatic stabilizers during the recession.)

Naturally, when the recession ends and economic growth resumes–even sluggish and halting growth–the automatic stabilizers reverse themselves and the temporary fiscal stimulus packages come to an end. Tax revenues rise; spending falls. In fiscal year 2013, government spending was 20.8% of GDP, which was slightly above the 40-year average of 20.4%. Federal government tax revenues were 16.7% of GDP in 2013, which was below the 17.9% from back in 2007 before the Great Recession and also below the 17.4% average over the last 40 years.
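
Those two shares pin down the deficit by simple subtraction. Here is a quick check in Python, using the CBO figures just quoted:

    spending_2013 = 20.8   # federal outlays, percent of GDP
    revenue_2013 = 16.7    # federal revenues, percent of GDP

    deficit_2013 = spending_2013 - revenue_2013
    print(f"FY2013 deficit: {deficit_2013:.1f} percent of GDP")          # 4.1

    # How far each side sits from its 40-year average:
    print(f"spending: {spending_2013 - 20.4:.1f} points above average")  # 0.4
    print(f"revenue:  {17.4 - revenue_2013:.1f} points below average")   # 0.7

One thing the subtraction makes plain: relative to the 40-year averages, the revenue shortfall (0.7 points) is somewhat larger than the spending overage (0.4 points).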

The budget deficit for 2013 was 4.1% of GDP. This is considerably lower than the gargantuan deficits from fiscal years 2009 to 2012 (which were 9.8, 8.8, 8.4 and 6.8% of GDP, respectively). But after four years of gargantuan budget deficits, this lower budget deficit for 2013 hardly qualifies as austerity. As I have pointed out before, the total rise in federal debt in recent years was, when expressed relative to GDP at the time, about twice the size of what happened with the Reagan deficits of the mid-1980s, substantially larger than the federal debt increase incurred during the Great Depression of the 1930s, and about two-thirds the size of the debt increase incurred to fight World War II.

The 2013 government budget deficit of 4.1% of GDP is smaller than in the immediate past, but given that the recession ended more than four years ago in June 2009, and unemployment has gradually worked its way down from 10% in October 2009 to 7.3% at present, it seems to me appropriate that the deficit should be declining, too. Moreover, by historical standards a budget deficit of 4.1% of GDP isn't tiny. If one goes back before the Great Recession, it's still the biggest annual deficit since the 1991-92 recession year or the Reagan deficits of the mid-1980s.

The long-term projections for higher budget deficits, driven by an aging population and continually rising health care costs, are troubling. But as for right now, the overall budget deficit seems just about right to me.

Africa's Growth: Not Just Minerals

The lowest-income region of the world, sub-Saharan Africa, has been having a surge of economic growth during the last decade or so. The IMF reviews the current situation in one of its Regional Economic Outlook reports, "Sub-Saharan Africa: Keeping the Pace," published in October.

The red line on the figure shows the rate of real GDP growth in the sub-Saharan African region since the late 1990s, measured on the left-hand axis. The green line, measured on the right-hand axis, shows where sub-Saharan Africa's growth rate ranks among the 188 IMF member countries. Economic growth in the sub-Saharan region as a whole in the last decade has consistently been above the median country, and in recent years has been in the 60th to 70th percentile.

But this good news comes wrapped in some concerns. It's also been a time of high prices for oil and many minerals. Is Africa's economic growth really just about countries getting a better price for their minerals? Or is something deeper going on? The second chapter of the IMF report addresses this question head-on (as usual, footnotes and references to figures or tables are omitted for readability).

"Sub-Saharan Africa has grown strongly since the mid-1990s. There is a common perception that this growth has been the result of relatively high commodity prices, particularly for natural resources such as oil and minerals, generating both higher commodity revenues and attracting substantial new investment. Although growth in some countries in the region is heavily dependent on the export of natural resources, many nonresource-intensive LICs [low-income countries] have also experienced rapid growth. In fact, 8 of the 12 fastest-growing economies in sub-Saharan Africa since 1995 were LICs considered nonresource-rich during this period, and as a group they have grown slightly faster than the oil exporters …"

The chapter then focuses on telling the story of six of these nonresource-intensive low-income countries. "All sample countries—Burkina Faso, Ethiopia, Mozambique, Rwanda, Tanzania, and Uganda—have achieved strong and sustained growth since the mid-1990s, despite not having exploited natural resources on a large scale during this period. The countries were chosen on the basis of having experienced real output growth greater than 5 percent on average during the period 1995–2010, and real per capita GDP growth of more than 3 percent over the same period."

At the risk of oversimplifying the many details that differ across these six countries, I'd say that two main themes run through the discussion of their success. First, they enacted economic policies that made them attractive for investment, and capital arrived in the form of domestic investment, foreign direct investment, debt relief, and aid. Second, they saw a boost in their agricultural sector, which accounted for a substantial share of the economy, and for a majority of the workforce and of the poor, in all of these countries.

On the theme of increased investment levels, the IMF writes:

"A common theme emerging from this analysis is that macroeconomic stabilization and decisive structural reforms were crucial in laying the foundations for sustained growth. Several of the countries emerged from armed conflict or had been characterized earlier by African socialism and state-led development strategies stemming from post-independence periods. Therefore, countries had to address major macroeconomic imbalances … Moreover, the six countries all saw higher aid flows, higher FDI [foreign direct investment], significant debt relief, and higher public capital expenditure than other nonresource-intensive LICs and fragile states. Although the nexus between investment and economic growth is not fully understood, this sustained high investment is likely to have supported high growth in the sample countries."

Along with running more sensible macroeconomic policies and reducing the role of price controls, these countries also saw improved governance overall. "The sample countries registered improvements in governance relative to the comparator group in four out of the five dimensions considered in the World Bank’s Worldwide Governance Indicators: control of corruption, government effectiveness, political stability and absence of violence, and regulatory quality."

These economic changes took effect in some important part through the agricultural sector. As economies develop, agriculture will typically play a smaller role in the economy, which, as this graph illustrates, has been happening for these six countries.

But the share of GDP that is in agriculture may understate the importance of agriculture in economic development, because an even larger share of low-income workers are in the agricultural sector. "Most of the labor force in the sample countries remains concentrated in the agricultural sector, and more significant productivity increases will be crucial to promoting inclusive growth and lifting more people out of poverty. It is estimated that the bulk of the sample countries’ active labor force is engaged in the agricultural sector—about 80–81 percent in Mozambique and Burkina Faso, 71 percent in Uganda, and 65 percent in Tanzania … Moreover, most of the extremely poor rely on subsistence farming for their livelihoods."

A final intriguing point in the report is that these six countries are quite different from each other in various ways. Burkina Faso uses the CFA franc as its currency and has a large cotton sector. Ethiopia has seen growth in industries like tourism, air travel, and cut flowers. Mozambique has large projects underway to produce and transmit electricity and gas. Rwanda has seen growth in coffee and tourism. Tanzania is an example of broad-based reforms across all sectors, and Uganda has been diversifying its exports. As the report argues:

"Initial conditions are not so important. The majority of the countries in the sample are landlocked, and many were just emerging from conflict situations in the early 1990s. Postconflict and other fragile states can move onto rapid growth paths once a clear vision is formulated and consistently implemented."

In other words, a number of countries in sub-Saharan Africa are reaching a point where sensible economic policy–no miracles needed!–can help them move to more rapid economic growth.

Why U.S. Economic Growth is Getting Harder

The formula for economic growth is clear enough: more workers, improved human capital as measured in education and experience, high levels of capital investment, and ongoing adoption of new technologies, all mixed together in a market-oriented economic environment that provides incentives for work and innovation. The problem for the U.S. economy is that all of these factors are looking shaky for the decade or two ahead. Brink Lindsey lays out the issues in "Why Growth Is Getting Harder," written as Policy Analysis #737 (October 8, 2013) for the Cato Institute.
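
To see how those ingredients combine, here is a stylized growth-accounting sketch in Python. It is a standard Solow-style decomposition, and the weights and growth rates are invented for illustration, not Lindsey's estimates:

    alpha = 0.3                    # capital's share of income, a common benchmark
    capital_growth = 0.025         # annual growth of the physical capital stock
    hours_growth = 0.005           # annual growth of total hours worked
    human_capital_growth = 0.003   # annual growth of skills per hour worked
    tfp_growth = 0.010             # technology: total factor productivity

    output_growth = (alpha * capital_growth
                     + (1 - alpha) * (hours_growth + human_capital_growth)
                     + tfp_growth)
    print(f"implied output growth: {output_growth:.1%}")   # about 2.3%

Lindsey's worry, restated in these terms, is that every input on the right-hand side looks weaker for the decade or two ahead, so the sum shrinks.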

For example, when it comes to the quantity of labor in the U.S. economy, it's become well known that the share of adults who have jobs or are looking for them–the "labor force participation rate"–has been dropping for most of the 21st century and dropping faster in the Great Recession and its aftermath. Lindsey offers an interesting graph of per capita hours worked. This measure takes into account changes in the share of adults working, as well as changes in the work week. Looking at this graph, and thinking about the ongoing retirement of the Baby Boom generation, it's hard to think that a surge in hours worked will be a main driver for future US economic growth.

When it comes to capital investment, it's been true for decades that the US is a low-saving and low-investment economy by world standards. Maybe at some point in the future these trends will turn, so that Americans save more and the U.S. government borrows substantially less, both leading to more funds for domestic investment. But it's hard to look at this long-run pattern and think that a surge of saving and physical capital investment will be a main driver for long-run U.S. growth.

Along with physical capital, it's at least as important to invest in human capital, a broad term that includes both education levels and skills gained through job experience. U.S. adults as a group tend to lag behind the average of other high-income countries when it comes to literacy, numeracy, and problem solving. This graph shows that while the U.S. had a surge in educational attainment between about 1930 and 1970, the change since then has been much more modest. Meanwhile, the rest of the world isn't standing still in expanding educational attainment for their populations.

So that leaves us with technology as a potential source of future economic growth. Measurements of productivity are quite volatile, depending on the business cycle, but the smoothed trend line shows that there is no reason to think that productivity growth is trending up. As just a little extra salt in the wound, Lindsey includes a line (measured on the left-hand axis) showing that in fact the U.S. workforce has experienced a rising share of workers in science, technology, engineering, and mathematics. Maybe this has helped to prevent the rate of productivity growth from falling further, but that's the best case one can make. For some perspectives on the controversy about whether a surge of US productivity growth is likely in the future, see here and here.

The U.S. economy continues to have some enormous strengths and opportunities, of course. For example, the U.S. is uniquely positioned to be a crossroads of the evolving global economy, with communications and transportation infrastructure, legal and financial industries, and personal and business connections around the world. Less expensive domestically produced energy supplies could give the economy a genuine boost. Finding ways for typical jobs in factories, offices, schools, and hospitals to work more effectively with information technology holds the possibility of enormous gains. But at their core, all of these kinds of changes go back to the basic building blocks of workers with an increasing level of skill, working with a rising stock of physical capital investment, finding and adapting new technologies.

The bottom line here is not that the U.S. economy goes into a tailspin, but rather that slow growth accumulates over time. Lindsey writes:

How important is such a [growth] slowdown? Thanks to the power of compound interest, relatively small differences in the growth rate add up to huge differences in living standards over time. Using the so-called “rule of 70,” you can figure out roughly how long it takes for output per person to double by dividing 70 by the growth rate. Thus, a 2 percent annual growth rate doubles incomes in 35 years, whereas with a 1.5 percent annual growth rate it takes 47 years for incomes to double. Consider the case of a 22-year-old college graduate, just starting in the workplace now. If the long-term average growth rate falls from 2 percent to 1.5 percent, the economy at the time our new college grad retires at age 65 will be almost 20 percent less wealthy than it would have been if the growth rate had remained on trend.
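
Lindsey's arithmetic is easy to verify. Here is a short Python check of both the rule-of-70 doubling times and the roughly 20 percent shortfall (the compounding formula is standard; only the framing is mine):

    import math

    # The exact doubling time solves (1 + g)**t == 2, so t = ln(2) / ln(1 + g).
    for g in (0.02, 0.015):
        exact = math.log(2) / math.log(1 + g)
        rule_of_70 = 70 / (100 * g)
        print(f"growth {g:.1%}: doubles in {exact:.0f} years (rule of 70 says {rule_of_70:.0f})")

    # The 22-year-old graduate works 43 years to age 65.
    years = 65 - 22
    shortfall = 1 - (1.015 / 1.02) ** years
    print(f"economy is {shortfall:.0%} smaller after {years} years")   # about 19%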

As I sometimes like to say, whatever your political goal, whether it is higher spending on social programs or a lower tax burden for citizens, it's easier to achieve that goal when the economy is growing more rapidly.

Reducing the Federal Prison Population

The number of Americans in federal prison has risen more than eight-fold in the last three decades, from 25,000 in 1980 to 213,000 today. While it seems plausible that some of the increase has been useful and justified as part of an effort to reduce crime, it also seems plausible that the increase has gone too far. Julie Samuels, Nancy La Vigne and Samuel Taxy of the Urban Institute lay out this view in "Stemming the Tide: Strategies to Reduce the Growth and Cut the Cost of the Federal Prison System." Basically, their plan is to reduce prison time for nonviolent drug offenses.

What factors are driving the higher federal prison population?

"The short explanation for the rapid prison population growth is that more people are sentenced to prison and for longer terms. In fiscal year (FY) 2011, more than 90 percent of convicted federal offenders were sentenced to prison, while about 10 percent got probation. By comparison, in 1986, only 50 percent received a prison sentence, over 37 percent received probation, and most of the remainder received a fine. Though the number of inmates sentenced for immigration crimes has also risen, long drug sentences are the main driver of the population’s unsustainable growth. In 2011, drug trafficking sentences averaged 74 months, though they have been falling since 2008. Mandatory minimums have kept even nonviolent drug offenders behind bars for a long time. The average federal prison sentence in 2011 was 52 months, generally higher than prison sentences at the state level for similar crime types. This difference is magnified by the fact that, at the federal level, all offenders must serve at least 87 percent of their sentences, while, at the state level, most serve a lower percentage and nonviolent offenders often serve less than 50 percent of their time. … Before the Sentencing Reform Act of 1984 and mandatory minimums for drug offenses, a quarter of all federal drug offenders were fined or sentenced to probation, not prison. Today 95 percent are sentenced to a term of imprisonment. The average time served before 1984 was 38.5 months, almost half of what it is now."

Samuels, La Vigne, and Taxy emphasize that the current system has a fair amount of discretion in how it handles drug offenders. For example, there is discretion over whether drug offenders will be prosecuted in the federal system or in a state system, where offenders are less likely to end up in prison. Prosecutors might choose to bring lesser charges, or judges might find various reasons to impose lower penalties. The Obama administration is putting this approach into effect, as Samuels, La Vigne and Taxy explain:

"Until recently, some lower-level, nonviolent drug offenders were subject to mandatory minimum penalties regardless of their role in the organization. As mandatory minimum penalties were originally intended to target “serious” and “major” offenders, these terms of imprisonment may be unnecessarily lengthy with no added benefit to public safety. Attorney General Holder’s 2013 Department Policy Memo directs prosecutors to refrain from charging lower-level, nonviolent drug offenders with drug quantities that would trigger a mandatory minimum sentence."

But I confess that this emphasis on discretion makes me a little queasy, because in a big country, discretion inevitably means that those who committed the same crime will end up being treated very differently by the criminal justice system, depending on accidents of geography and jurisdiction and which prosecutor and judge they face. Greater use of discretion now is probably a lawsuit for unequal treatment waiting to happen in the future. In addition, relying on the discretion of law enforcement is a way for legislators to duck responsibility.

Thus, I would favor changing the sentencing laws directly. For example, among the policies the authors discuss is one that would reduce mandatory minimum sentences for some drug offenders: say, cutting certain 5-year minimums to 2, 10-year minimums to 5, and 20-year minimums to 10. They estimate that over time, this change "would have a monumental effect on the prison system."

There is also a range of options other than sentencing an offender to prison, like halfway houses and home confinement. However, it seems that increased use of probation is the only way to save money. Samuels, La Vigne, and Taxy explain:

"The average cost of housing an inmate in a BOP facility in FY 2012 was over $29,000 annually. … [M]uch of these average costs of housing an inmate are fixed, as they go toward maintaining and staffing facilities (which are unlikely to close as a result of a shrinking prison population). Thus, the average marginal cost of increasing or decreasing the population by one inmate is $10,363. Average annual cost per inmate housed in a Residential Reentry Center (RRC, also known as a half-way house) for the BOP is $27,003. The BOP also has custody over offenders on home confinement, for which it pays contractors a flat fee for each offender. As documented by the GAO, the reimbursement rate to contractors for each inmate in home confinement that the BOP pays is pegged to half the overall per diem rate of an RRC, or over $13,500 annually. Any policy change that transfers an inmate from a BOP facility to home confinement would, under current contracting arrangements, cost more than keeping the inmate in a BOP facility ($13,500 versus $10,363, respectively). The annual cost of supervision by probation officers, however, is about $3,347 per offender. We estimate that augmenting this traditional probation with electronic monitoring to verify home confinement would cost a total of $5,890 annually …"
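
Putting those per-offender figures side by side makes the ranking easy to see. Here is a small Python sketch using only the numbers quoted above (the labels are mine):

    prison_marginal_cost = 10_363   # marginal annual cost of one federal prison bed

    alternatives = {
        "Residential Reentry Center": 27_003,
        "home confinement (current contracts)": 13_500,
        "probation": 3_347,
        "probation plus electronic monitoring": 5_890,
    }

    for option, cost in sorted(alternatives.items(), key=lambda item: item[1]):
        delta = prison_marginal_cost - cost
        verdict = f"saves ${delta:,}" if delta > 0 else f"costs ${-delta:,} more"
        print(f"{option}: ${cost:,}/year ({verdict} per offender vs. prison)")

Under current contracts, only the probation options come in below the $10,363 marginal cost of a prison bed, which is why probation is the route to actual budget savings.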

In good cost-benefit fashion, Samuels, La Vigne, and Taxy frame their report in terms of cost savings and reducing prison overcrowding. While these benefits are not to be scorned, they fall short of capturing the main policy problem here. When the government of the land of the free is locking up eight times as many people as 30 years ago–and remember, this article is only about federal prisons, not state ones–imprisonment becomes a central life experience for a larger share of the population, for their extended families, and for their communities. This is much more than a budgetary issue. Changing the law so that many fewer nonviolent drug offenders are in federal prison, along the lines suggested here, would still mean that the federal prison population was, say, six times as high as in 1980, instead of eight times as high.

For some earlier posts on U.S. rates of imprisonment, see \”Too Much Imprisonment\” (November 30, 2011) and \”U.S. Imprisonment in International Context: What Alternatives?\” (May 31, 2012).

103 Years to Implement: The Buy Indian Act

Here\’s a write-your-own punchline fact: The Buy Indian Act was signed into law by President William Howard Taft on June 24,  1910. The regulations that allow the law to be implemented and enforced were just completed on July 8, 2013, only 103 years later.

Paula Woessner provides an overview of the situation in \"Long-awaited rules require the BIA to \'buy Indian,\'\" published in the October 2013 issue of Community Dividend, a regular publication of the Federal Reserve Bank of Minneapolis. She writes:

More than a century after its passage, an act of legislation with the potential to transform the federal government’s purchasing practices in Indian Country finally has the force of law. Effective July 8, 2013, the U.S. Department of the Interior adopted final rules that require the Bureau of Indian Affairs (BIA) to give preference to Indian-owned or -controlled businesses in matters of procurement. The rules are the long-awaited last step in implementing the Buy Indian Act, a law signed on June 25, 1910. Although the act has been on the books since then, it was unenforceable until now because there were no rules adopted for implementing it. Rule writing didn’t begin in earnest until 1982 and then proceeded in fits and starts over the ensuing 30 years. It is now, at long last, completed.

It\’s a little hard here to say what happened, because the whole point is that for a century or so, not very much did happen and contracts did not typically flow firms owned by Native Americans. The key sentence of the 1910 act reads:

So far as may be practicable Indian labor shall be employed, and purchases of the products (including, but not limited to printing, notwithstanding any other law) of Indian industry may be made in open market in the discretion of the Secretary of the Interior.

Apparently the key words of this bill were not the instructions \”so far as may be practicable Indian labor shall be employed,\” but rather the phrase \”in the discretion of the Secretary of the Interior.\” Here\’s the U.S. Department of the Interior news release about the new rules.

I confess that I\’m not a big fan of rules that require the government to purchase goods and services from firms run by people of a particular ethnicity or gender. When government is buying goods and services, it should seek to get the best possible deal for taxpayers. Set-asides always come with the need for a bunch of well-meant rules. As Woessner explains,

The Buy Indian Act rules authorize the Secretary of the Interior to set aside procurement contracts for Indian economic enterprises (IEEs), which are defined as for-profit businesses that are at least 51 percent Indian-owned. The tribes or individual Indians that own the IEEs must manage the contract, receive the majority of earnings from it, and control the business’s daily operations. … Under the rules, the BIA must give Indian businesses first preference in procurement matters by seeking contract offers from at least two IEEs and then selecting one of them, so long as it is of a “reasonable and fair market price.” … Subcontracting is permitted, but at least 50 percent of the subcontracted work must go to IEEs.

So to hand out what is estimated to be about $45 million in contracts per year, the government will need to decide whether enterprises are truly Indian-owned, or whether the Indian ownership is a front for others. It will need to decide whether bids represent a \"reasonable and fair market price.\" It will need to monitor subcontracting. When an agency has taken 103 years to write the implementation rules in the first place, it is reasonable to question how effectively it will carry out these tasks.
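Purely as an illustration of how many judgment calls those rules embed, here is a sketch in Python of the eligibility and award checks, paraphrased from Woessner\'s summary; the field names and structure are my own invention, not anything from the BIA\'s actual regulations:

```python
from dataclasses import dataclass

@dataclass
class Bidder:
    """Illustrative stand-in for a business seeking a set-aside contract."""
    indian_ownership_share: float       # fraction of equity Indian-owned
    owners_manage_contract: bool        # owners manage the contract itself
    owners_get_majority_earnings: bool  # owners receive most of the earnings
    owners_run_daily_operations: bool   # owners control daily operations
    iee_subcontract_share: float        # fraction of subcontracted work to IEEs

def qualifies_as_iee(b: Bidder) -> bool:
    """Ownership-and-control test: at least 51 percent Indian-owned,
    with the owners actually running the business and the contract."""
    return (b.indian_ownership_share >= 0.51
            and b.owners_manage_contract
            and b.owners_get_majority_earnings
            and b.owners_run_daily_operations)

def award_permitted(winner: Bidder, iee_offers: int, fair_price: bool) -> bool:
    """The BIA must solicit offers from at least two IEEs, the winning
    price must be 'reasonable and fair,' and at least 50 percent of any
    subcontracted work must go to IEEs."""
    return (qualifies_as_iee(winner)
            and iee_offers >= 2
            and fair_price
            and winner.iee_subcontract_share >= 0.50)
```

Every one of those inputs (who really controls daily operations, whether a price is \"reasonable and fair\") is a determination the agency must make and defend, which is exactly the monitoring burden described above.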

US Adults Lag in Competence: The PIAAC

How do America\’s working-age adults stack up against their peers from the other high-income countries around the world? The OECD has started a survey called the Program for the International Assessment of Adult Competencies to answer this question. In 2011-2012, a nationally representative survey of 5,000 Americans age 16-65 took the test, along with adults from 22 other countries. The U.S. Department of Education now published an overview of the results in \”Literacy, Numeracy, and Problem Solving in Technology-Rich Environments Among U.S. Adults: Results from the Program for the International
Assessment of Adult Competencies 2012, First Look.\” The report was written by Madeline Goodman,
Robert Finnegan, Leyla Mohadjer. Tom Krenzke, and Jacquie Hogansee.

The PIAAC measures competence in several areas: literacy and reading components, numeracy, and problem solving in technology-rich environments. The evidence suggests that U.S. adults are below average in all of them. Here are three summary figures, for literacy, numeracy, and problem solving. Each shows the PIAAC average along with the US score. The unshaded bars above and below the US score are countries whose scores do not differ from the US score by a statistically significant margin; the shaded bars are countries whose scores do differ significantly.

The OECD has also published its own first tabulation of these results, with much additional discussion, in OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. It notes that only three countries have below-average scores in all of these domains: along with the United States, the other two are Ireland and Poland. In a fact sheet summarizing the US results, the OECD writes: \"U.S. performance is weak in literacy, very poor in numeracy, but only slightly below average in problem solving in technology-rich environments.\" At the end of 2013, the OECD is scheduled to publish a follow-up study called \"Time for the U.S. to Reskill? What the Survey of Adult Skills Says.\"

But in some ways, these dismal findings about the competencies of US adults should be no surprise to regular readers of this blog. The US high school graduation rate rose from 6% in 1900 to 80% in 1970–and then had essentially no increase for 30 years before edging up a bit in the last decade or so. In the late 1960s the US led the world in high school graduation rate, but by 2000, it was below the average of high-income countries. Similarly, the US used to lead the world in the share of the population that went on to additional education after high school, but now it is in the middle of the pack. The share of Americans ages 25-34 with a tertiary degree is lower than for those ages 35-64–by this measure, access to higher education is diminishing, despite the fact that the US spends far more on higher education than one would expect based on its level of per capita GDP.

A prominent 1983 report, \"A Nation at Risk,\" argued that the US was engaging in \"unthinking, unilateral educational disarmament.\" The PIAAC evidence is a report card on the poor overall performance of the U.S. education system over the last four decades.