The 2012 Nobel Prize to Shapley and Roth

The official announcement reads this way: "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2012 was awarded jointly to Alvin E. Roth and Lloyd S. Shapley 'for the theory of stable allocations and the practice of market design.'" Thus, the prize seeks to emphasize the interplay between mathematical economic theory and concrete applications. Each year when the Nobel prize is awarded, the Prize Committee puts up some useful background material to explain their choice: their "Popular Information" paper is here and the "Scientific Background" paper is here. I'll draw on both in what follows.

In this duality between theory and application, Lloyd Shapley is cast as the theorist. Indeed, in an interview with the AP on Monday, he said: "I consider myself a mathematician and the award is for economics. I never, never in my life took a course in economics." But of course, economists view their field as a big enough tent to include at least some mathematicians--in particular those who study game theory, as with the 1994 Nobel prize awarded to John Nash and others.

In 1962, Shapley and co-author David Gale published a famous paper in the American Mathematical Monthly called "College Admissions and the Stability of Marriage" (69:1, pp. 9-15). It can be read for free (with registration) at JSTOR, or it is available in various places on the web, like here.

They begin by offering college admission as an example of the kind of problem they are considering. There are a number of colleges and a number of applicants. Colleges have preferences over who they wish to admit, and applicants have preferences over where they would like to attend. How can they be matched in a way that will, in some sense we need to define, "satisfy" both sides? Before going further, notice that this problem of multiple parties on each side, with a problem of matching them so as to "satisfy" all parties, is a characteristic of marriage markets--although in that case the two groups are looking for only one partner apiece--and also of employers and potential employees in the job market.

As a starting point, it is clearly impossible to "satisfy" all parties in the sense that everyone will get their first choice. A college can't assure that all its preferred applicants will want to attend; not all applicants are likely to get their first choice. Thus, Gale and Shapley focused instead on finding a solution that would be "stable," which means in the context of college admissions that once everyone is matched up, there is no pairing of a student and a college such that the student would rather be at that college than the one they are attending AND the college would prefer that student to one of the students it had already attracted. In other words, no student or college will seek to make an end-run around a stable mechanism.

Gale and Shapley proposed a "deferred acceptance" procedure to get a stable result to their matching problem. Here is how the Nobel committee describes the process:

"Agents on one side of the market, say the medical departments, make offers to agents on the other side, the medical students. Each student reviews the proposals she receives, holds on to the one she prefers (assuming it is acceptable), and rejects the rest. A crucial aspect of this algorithm is that desirable offers are not immediately accepted, but simply held on to: deferred acceptance. Any department whose offer is rejected can make a new offer to a different student. The procedure continues until no department wishes to make another offer, at which time the students finally accept the proposals they hold.

In this process, each department starts by making its first offer to its top-ranked applicant, i.e., the medical student it would most like to have as an intern. If the offer is rejected, it then makes an offer to the applicant it ranks as number two, etc. Thus, during the operation of the algorithm, the department's expectations are lowered as it makes offers to students further and further down its preference ordering. (Of course, no offers are made to unacceptable applicants.) Conversely, since students always hold on to the most desirable offer they have received, and as offers cannot be withdrawn, each student's satisfaction is monotonically increasing during the operation of the algorithm. When the departments' decreased expectations have become consistent with the students' increased aspirations, the algorithm stops."

Here's how the procedure would work in the marriage market:

"The Gale-Shapley algorithm can be set up in two alternative ways: either men propose to women, or women propose to men. In the latter case, the process begins with each woman proposing to the man she likes the best. Each man then looks at the different proposals he has received (if any), retains what he regards as the most attractive proposal (but defers from accepting it) and rejects the others. The women who were rejected in the first round then propose to their second-best choices, while the men again keep their best offer and reject the rest. This continues until no women want to make any further proposals. As each of the men then accepts the proposal he holds, the process comes to an end."
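Since the procedure is a fully specified algorithm, it is straightforward to express in code. Below is a minimal sketch in Python of the women-propose version just described. The function and the tiny preference lists are my own illustration, not code from Gale and Shapley or from the Nobel materials; it assumes equal numbers on each side and complete preference lists.

```python
# A minimal sketch of Gale-Shapley deferred acceptance (women propose).
# Assumes equal numbers of proposers and receivers, and complete lists.

def deferred_acceptance(proposer_prefs, receiver_prefs):
    # Precompute each receiver's ranking: lower number = more preferred.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next list position to try
    held = {}                                     # receiver -> offer held
    free = list(proposer_prefs)                   # proposers with no offer held
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]     # p's best untried option
        next_choice[p] += 1
        if r not in held:
            held[r] = p                           # first offer: hold it
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])                  # r trades up, rejects old offer
            held[r] = p
        else:
            free.append(p)                        # r rejects p outright
    return {p: r for r, p in held.items()}        # only now are offers accepted

women = {"Ann": ["Bob", "Carl"], "Eve": ["Bob", "Carl"]}
men = {"Bob": ["Eve", "Ann"], "Carl": ["Ann", "Eve"]}
print(deferred_acceptance(women, men))  # Eve-Bob, Ann-Carl
```

Notice that the loop mirrors the rounds of proposals, and nothing is final until the loop ends: that deferral is exactly what delivers stability.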

Gale and Shapley prove that this procedure leads to a "stable" outcome. Again, this doesn't mean that everyone gets their first choice! It means that when the outcome is reached, there is no combination of medical school and applicant, or of man and woman in the marriage example, who would both prefer a different match from the one with which they ended up. But Gale and Shapley went further. It turns out that there are often many stable combinations, and in comparing these stable outcomes, the question of who does the choosing matters. If women propose to men, women will view the outcome as the best of all the stable matching possibilities, while men will view it as the worst; if men propose to women, men as a group will view it as the best of all stable matching possibilities, while women will view it as the worst. As the Nobel committee writes, "stable institutions can be designed to systematically favor one side of the market."
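Stability is also easy to verify mechanically. A sketch along the same hypothetical lines as above: a matching is unstable exactly when there is a "blocking pair," a proposer and receiver who each rank the other above their assigned partner.

```python
# Check stability of a matching (proposer -> receiver), using the same
# hypothetical preference-list format as the sketch above.

def is_stable(match, proposer_prefs, receiver_prefs):
    partner = {r: p for p, r in match.items()}    # receiver -> proposer
    for p, prefs in proposer_prefs.items():
        # Every receiver that p ranks above p's assigned partner...
        for r in prefs[:prefs.index(match[p])]:
            r_prefs = receiver_prefs[r]
            # ...must not prefer p to the proposer r is matched with.
            if r_prefs.index(p) < r_prefs.index(partner[r]):
                return False                      # blocking pair (p, r) found
    return True
```

Running the deferred-acceptance function above on the toy lists and then applying this check returns True, as the Gale-Shapley theorem guarantees.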

There are many other potential tweaks and twists here. What if monetary payments are part of the match? What if traders are trading indivisible objects? These sorts of issues and many others kept a generation of game theorists busy. But for present purposes, the next key insight is that although I've explained the deferred-acceptance procedure as a step-by-step process, where parties make one offer at a time, an equivalent process can be run by a clearinghouse, if the parties submit sufficient information.

And this is where Alvin Roth enters the picture, bringing in detailed practical implications for analysis. He pointed out in a 1984 paper that the National Resident Matching Program for matching medical students to residencies was actually a close cousin to the Gale-Shapley procedure. Roth's theoretical analysis pointed out that the form of the match being used allowed the medical schools, rather than the students, to be the "proposers," and thus created outcomes that the medical schools viewed as the best of the stable options and the students viewed as the worst of the stable options. He redesigned the "match" both to let students be the proposers, and also to address the issue that in a number of cases a married or committed couple wanted to end up at the same school or in the same geographic location.

Roth then found other applications for what he has called the "market design" approach. (It is ironic but true that the market design approach can include prices if desired, but doesn't actually need prices to function.) The key problems in these matching scenarios often involve timing. There is often pressure on various parties--whether in marriage or college admissions--to commit early, which can lead to situations where the outcome will "unravel" as people seek a way out of their too-early commitments. On the other side, if the process slogs along too late, then "congestion" can result when an offer is turned down and it becomes too late to make other offers.

Roth found applications of the Shapley matching approach in a wide variety of academic matching settings, both in the U.S. and in other countries. He also applied a similar process to students choosing between public schools in New York City and Boston. More recently, he has sought to apply these insights to the problem of matching kidney donors with those in need of a kidney transplant. (In fairness, it should be pointed out that Roth has written a number of strong theoretical papers as well, but the Nobel committee emphasized his practical concerns, and I will follow their lead here.) As one might expect, these real-world cases raise various practical problems. Are there ways of gaming the system by not listing your first choice, which you are perhaps unlikely to get anyway, and pretending great enthusiasm for your fifth choice, which you are more likely to get? Such outcomes are sometimes possible in practical settings, but it proves much harder to game these mechanisms than one might think. Usually, you're better off just giving your true preferences and seeing how the mechanism plays itself out.

The Nobel prize to Shapley and Roth is one of those prizes that I suspect I will have a hard time explaining to non-economists. The non-economists I know ask practical questions. They want to know how the work done for the prize will spur the economy, or create jobs, or reduce inequality, or help the poor, or save the government money. Somehow, better matching for medical school students won't seem, to some non-economists, like it's "big enough" to deserve a Nobel. But economics isn't all about today's public policy questions. The prize rewards thinking deeply about how a matching process works. In a world of increasingly powerful information-processing technology, where we may all find ourselves "matched" in various ways based on questions we answer by software we don't understand, I suspect that Alvin Roth's current applications are just the starting point for ways to apply the insights developed from Lloyd Shapley's "deferred acceptance" mechanism.

Patents Tipping Too Far: Three Examples

The basic economics of patents as taught in every intro econ class is a balancing act: On one side, patents provide an incentive for innovation, by giving innovators a temporary monopoly over the use of their invention. This temporary monopoly rewards innovation by allowing the inventor to charge higher prices, and thus the tradeoff is that consumers temporarily pay more--although consumers of course also benefit from the existence of the innovation. Like any balancing act, patents can tip too far in one direction or the other. On one side, patents can fail to provide sufficient incentive (that is, large enough profits) for inventors. But on the other side, patent protection that is too long or too rigid can lock in profits for early innovators for an extended period, both at the long-term expense of consumers and also in a way that can cut off possibilities for future innovators.

There is some concern that the balance of patent law has tipped in a way that is overly favorable to earlier innovators. Without trying to make the case in any detail, here are three straws in the wind of this argument that recently crossed my desk.

1) Has the specialized federal appeals court for patent cases run amok?  

Back in the 1970s, it seemed clear that the enforcement of patents was wildly uneven. Thus, in 1982 the United States Court of Appeals for the Federal Circuit, one step below the U.S. Supreme Court, was created to hear all appeals of patent decisions from around the country. The difficulties are described by Timothy B. Lee under the (perhaps slightly overstated) title, "How a rogue appeals court wrecked the patent system," appearing in Ars Technica.

Lee tells the stories of the wacky 1970s, when companies that got patents literally raced to file infringement suits in jurisdictions that they thought would be favorable, while competitors raced to file suits seeking to invalidate those patents in jurisdictions that they thought would be favorable--because whoever filed first would often determine where the cases would be consolidated, and the jurisdiction where the case was heard would largely determine the outcome. Here's what happened in the 1970s: "Every Tuesday at noon, a crowd would gather at the patent office awaiting the week's list of issued patents. As soon as a patent was issued, a representative for its owner would rush to the telephone and order a lawyer stationed in a patent-friendly jurisdiction such as Kansas City to file an infringement lawsuit against the company's competitors. Meanwhile, representatives for the competitors would rush to the telephone as well. They would call their own lawyers in patent-skeptical jurisdictions like San Francisco and urge them to file a lawsuit seeking to invalidate the patent. Time was of the essence because the two cases would eventually be consolidated, and the court that ultimately heard the case usually depended on which filing had an earlier timestamp."

But unsurprisingly to any student of political economy, the new court ended up being staffed by lawyers who believed in very strong patent enforcement. The share of patents that were found to be infringed went from 20-30% in most of the 1960s and 1970s up to more like 50-80% for most years of the 1980s. Lee tells the story in more detail, but the new court eventually allowed software to be patented, and "business methods" to be patented. "Microsoft received just five patents during the 1980s and 1,116 patents during the 1990s, for instance. Between 2000 and 2009? The company received 12,330 patents …" Since 2006, the U.S. Supreme Court has overruled at least four major decisions from this lower court.

Lee argues: "Either way, breaking the Federal Circuit's monopoly on patent appeals may be the single most important step we can take to fix the patent system. The Federal Circuit looks likely to undermine other reforms undertaken by Congress, just as it has resisted the Supreme Court's efforts to bring balance to patent law. Only by extending jurisdiction over patent appeals to other appeals courts that are less biased toward patent holders can Congress return common sense to our patent system."

2) Why are patent cases being decided at the International Trade Commission?

In May 2012, the International Trade Commission affirmed an earlier decision to ban imports of a number of Motorola Android-based smartphones and other mobile devices because they infringed a Microsoft patent that involves software for scheduling meetings--software that is rarely used and by one estimate is worth 33 cents per phone. The case raises the standard but difficult issues about how innovation can flourish in an industry like high-tech electronics, where products use dozens of overlapping patents, and thus every new product is susceptible to claims that it is infringing on some patent, somewhere.

But for me, the eye-opening part of the case was the decision-maker: Why was the International Trade Commission deciding what was essentially a patent infringement case? K. William Watson tells some of this story in "Still a Protectionist Trade Remedy: The Case for Repealing Section 337," which was published as Cato Policy Analysis #708. It turns out that under Section 337 of the Tariff Act of 1930, the ITC has power to block imports of any products that involve "unfair means of competition." The ITC process is faster than the courts, and has often proven quite friendly to existing patent-holders. The ITC remedy of shutting off imports is very costly. Watson summarizes the argument this way:

"The current state of the global 'patent wars' in the mobile device industry aptly demonstrates the risks posed by Section 337. Courts have been perfectly capable of imposing strong remedies to deal with patent infringement, which sometimes include banning a product from the market. Most of these disputes, however, are merely part of a business model where competitors must collaborate to pool together the many patented technologies that make up cutting-edge consumer products such as smartphones and tablet computers. While companies would love to have their competitors' products forced off the shelf, the truth is that many of the disputes involve patents that are worth only a tiny fraction of the product's total value, meaning that injunctive relief is not always appropriate. … There is only one simple and effective solution: repeal the law. The ITC has no business imitating a court of law and is not equipped to do so. The foreign origin of a product does not make it necessary to subject its producer to a separate regime that more quickly and forcefully settles intellectual property disputes. The existence of two distinct patent enforcement mechanisms disrupts the balance of U.S. patent law and, because one mechanism is only available to challenge imports, violates U.S. trade obligations."

3) When brand-name drug companies compensate generic drug companies to delay entering the market.

Brand-name drug companies typically have a patent, and when that patent expires, it then becomes possible for manufacturers of generic drugs to enter that market. But this process isn't necessarily smooth, as the Federal Trade Commission explains in a recent amicus brief in U.S. District Court.

When a brand-name firm is expecting the potential entry of a generic producer, a standard strategy for the brand-name firm is that, when its patent expires, it can start selling a generic equivalent of its own drug. The FTC finds that this strategy can cut the profits for the new generic producer by 40-50%. For example, when the brand drug Paxil went off-patent, a company called Apotex was the first to be allowed to sell a generic equivalent (the first generic company to enter gets a 180-day period of exclusivity as an encouragement to enter). Apotex was expecting sales of about $550 million in that 180-day window, but then the maker of Paxil put out its own generic version, and Apotex generated only $150-200 million in sales during that period.
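To put a number on the size of that hit (the arithmetic here is mine, using the figures above): taking the midpoint of the $150-200 million range, Apotex's sales shortfall was

$$1 - \frac{175}{550} \approx 0.68,$$

that is, roughly two-thirds of expected sales in this particular case--an even larger hit than the 40-50% profit reduction the FTC describes as typical.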

This scenario creates the possibility for an anti-competitive deal: the original drug seller agrees not to produce its own generic equivalent, and the new generic producer agrees to delay entry into the market. The result is that new competition from the generic is delayed, and as the FTC explains: "Both effects are harmful to consumers, who face higher drug prices over a longer period." The FTC argues in its brief that brand-name pharma firms should not be allowed to compensate generic producers for delay in entering the market--whether that compensation takes the form of a direct payment or the form of an agreement not to start its own line of generics. The FTC also points out that such payments are fairly rare--and thus presumably not necessary in finalizing the switch from a patented product to one that can be sold in generic version.

In short, even what might seem like a simple transition–the act of a patent expiring–can be fraught with practical difficulties and ways for the incumbent firm to extract just a little more in profit. The practical world of patents is vastly more complex and ambiguous than the standard textbook tradeoff.

High School Classes in Economics or Personal Finance

The Council of Economic Education focuses on K-12 economic education. I just ran across their biennial Survey of the States: Economics and Personal Finance Education in our Nation's Schools, which was actually released last spring. I'll offer here an overview of the requirements for such courses across states, and then a few thoughts about the useful focus of such courses. After all, for many students who won't take an economics course in college or won't attend college, a high school course in these topics may be all the background they have before they start taking out credit cards, leasing a new car, or finding a mortgage broker willing to set them up with a zero-down-payment balloon-mortgage home loan.

As the map shows, pretty much all states have economics somewhere in their state standards. However, only 40 states require that this standard actually be implemented by school districts (!); 26 states require that schools offer a course that is specifically economics; 22 of those states require that students take such a course; and 16 of those states have some required testing for economics students.

Personal finance courses are less widespread: 46 states have some personal finance in their state standards; 36 states require school districts to implement that standard (!, again); 14 states require that a high school course be offered in the subject; 13 of those states require that students take such a course; and five of those states have required testing on personal finance topics.

My eldest son is in high school, and my daughter will soon be in high school, so I'm highly sensitive to the fact that a high school day has only so many class hours. It's not a useful suggestion to tell schools that they need to keep adding classes. In particular, my guess is that if students are to take separate courses in both economics and personal finance, then there will be real trade-offs elsewhere in the curriculum.

But when I think about high school students from, say, the 60th percentile down--many of whom will either not attend college or, if they do, not complete a four-year degree--it seems to me that it should be possible for a single course to provide a solid background in topics like household budgeting, credit cards, insurance, personal saving, personal taxes, and car loans, home loans, and student loans. In addition, it seems to me that, with some thought, such material could be intertwined with a very basic course in economic principles. For example, discussions of household budgeting could move to an explanation of demand curves, and how quantity demanded adjusts to changes in price. Discussions of personal taxes could be a basis for talking about one way in which the federal government raises funds. Discussions of credit cards can open up topics like when consumers are protected by the presence of competition, and when there is an argument for regulation. Discussions of car loans and home loans can be used as a basis for talking about what the Federal Reserve is seeking to do when it raises or lowers interest rates.

But my vision of this hybrid economics/personal finance course faces a number of obstacles. Some high school economics courses are prepping for the AP exam; others are a stripped-down version of the AP economics course for those not planning to take the exam. In my (limited) experience, too many of these courses are an abbreviated version of what at many colleges and universities is a full-year sequence in microeconomics and macroeconomics. In my vision, the economics portion of the course would need to be stripped down much more, to make room for the personal finance component. This kind of course would also require a certain amount of teacher training, mainly so that social studies teachers who are often primarily focused on teaching history and government can teach such courses effectively.

Finally, my sense is that many high school curriculum standards face a serious problem of mission creep: that is, they start off thinking about what the median student should be required to learn--which is already too high a standard, because 50% of all students are inevitably below the median. Then those writing the standards start adding bits and pieces, all of which seem like good ideas, but the result is a set of "requirements" aimed at the top slice of the high school class. Then high school teachers in the trenches need to figure out how to cover these expanded and inflated requirements in a classroom where the students cover the whole vast range of abilities and backgrounds.

Of course, there are other models, like working some personal finance topics into the math classes that students take, or into the home economics course. My children seem to go through the same set of warnings about alcohol, smoking, and drug use every year. Before they take up smoking and drinking out of sheer boredom, I wouldn't mind seeing some of that time devoted to risky financial behavior.

But one way or another, financial literacy matters a lot. Many young people are starting off their adult lives by maxing out their credit cards, taking out car loans and student loans that they can't afford, and then facing years of dealing with ill-informed choices made when they were in their late teens and early 20s. In a short essay accompanying the CEE report, Annamaria Lusardi points out that there is a developing sense in education systems across the world that financial literacy is an important part of a high school education. Lusardi writes: "Given the importance of financial literacy, it is perhaps not surprising that, in 2012, the OECD Program for International Student Assessment (PISA) will dedicate an entire module to financial literacy, in addition to the topics they normally cover." It's time to find a niche for both financial and economic literacy--maybe in the same course?--in the high school curriculum.

When Tradeable Pollution Permits Fall Short

Like a lot of economists, I occasionally break into a semi-spontaneous song-and-dance about how tradeable pollution permits have all sorts of advantages. In an old-fashioned command-and-control system, every firm needs to reduce pollution emissions to a given standard, even though for some firms meeting that standard will be cheap and easy and for other firms it will be costly and difficult. In a system of tradeable pollution permits, firms that can reduce pollution less expensively can do so, and sell their extra pollution permits to other firms. As a result, the goal of limiting emissions can be reached more cheaply. Even better, firms can make money by seeking out innovative ways to cut emissions, because when pollution permits can be sold or need to be bought, there is a clear financial incentive to do so.
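To see the cost-saving logic in miniature, here is a toy numerical sketch; the two firms and their per-ton abatement costs are hypothetical numbers of my own, not drawn from any actual program.

```python
# Two firms must jointly cut 20 tons of emissions. Firm A abates cheaply,
# firm B expensively. (All numbers are hypothetical, for illustration only.)
cost_a = 100  # firm A's cost per ton abated
cost_b = 400  # firm B's cost per ton abated

# Command-and-control: each firm must abate 10 tons itself.
command_total = 10 * cost_a + 10 * cost_b   # = $5,000

# Tradeable permits: A abates all 20 tons and sells 10 permits to B at
# any price between the two costs, say $250 per ton.
price = 250
trading_total = 20 * cost_a                 # = $2,000 total resource cost
a_net = 20 * cost_a - 10 * price            # = -$500: A profits from selling
b_net = 10 * price                          # = $2,500, versus $4,000 abating

print(command_total, trading_total, a_net, b_net)
```

Same 20 tons abated, at less than half the total cost, and both firms come out ahead of the uniform standard; that gap is exactly the financial incentive to innovate described above.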

Dallas Burtraw has preached this gospel of tradeable pollution permits many times himself, which is part of why I was intrigued by his recent paper "The Institutional Blind Spot in Environmental Economics" (Resources for the Future Discussion Paper 12-41, August 2012). He argues that systems of tradeable pollution permits have one severe flaw: when they set the level of pollution that is allowed to be emitted in future years, they cannot foresee future developments that may cause that level to be implausibly high.

For example, the 1990 Clean Air Act set up a framework for using tradable pollution permits that sought to reduce emissions of sulfur dioxide by half from 1980 levels. The program was a success, reducing emissions at costs that were probably 40% or so below those of command-and-control pollution rules. My own Journal of Economic Perspectives ran a couple of articles on the program (here and here) back in the Summer 1998 issue.

Burtraw offers an updated view. As the costs of reducing SO2 emissions and the benefits of doing so became better understood, it seemed clear that SO2 emissions should be reduced much farther and faster. Congress proved unable to make such a change legislatively, but regulators pushed forward in various ways. The result is that the program for trading SO2 pollution permits worked for a time in the 1990s but has since become irrelevant, with additional declines in SO2 emissions being driven by old-fashioned regulatory actions. Here's Burtraw (footnotes omitted):

"The trading program was statutorily created in the Clean Air Act Amendments of 1990 and led to cost reductions of roughly 40 percent compared to traditional approaches under the Clean Air Act. However, the program had what literally became a fatal flaw: namely, an inability to adjust to new scientific or economic information. Though information current in 1990 suggested that benefits of the program would be nearly equal to costs, by 1995 there was strong evidence that benefits were an order of magnitude greater than costs. Today the Environmental Protection Agency would argue that benefits are more than thirty times the costs. Unfortunately, to change the stringency of the program requires an act of Congress, at least according to the D.C. Circuit Court. The Act locked in the emissions cap, and despite several legislative initiatives to change the stringency of the trading program, none have been successful. …

"If the nation's fate with respect to sulfur dioxide emissions were left to Congress, tens of billions of dollars in additional environmental and public health costs would have been incurred in the last few years and into the future. Fortunately, the inability of Congress to act was backstopped by the regulatory ratchet of the Clean Air Act that triggers a procession of regulatory initiatives based on scientific findings that have been effective in shaping investment and environmental behavior in the electricity sector."

"The sulfur dioxide cap-and-trade program was intended to reduce sulfur dioxide emissions from power plants from anticipated levels of 16 million tons per year to 8.95 million tons per year by 2010. However, evidence based on integrated assessment suggests an efficient level would be just over 1 million tons per year. In the absence of legislative action, regulatory initiatives have taken effect and driven emissions from power plants to 5.157 million tons, as measured in 2010. By 2015, the Clean Air Interstate Rule and the Mercury and Air Toxics Standard will further reduce emissions to 2.3 million tons per year. In doing so, the emissions constraint under the 1990 Clean Air Act amendment has become irrelevant, and the price of those tradable emissions allowances has fallen from several hundred dollars a ton to near zero."

"The sulfur dioxide cap-and-trade program is the flagship example of the use of economic instruments in environmental policy. However, since its adoption in 1990, although the sulfur dioxide trading program gets most of the credit in textbooks, more than half of the emissions reductions that have and will occur are due to regulation."

In short, a scheme for trading pollution permits works only because it sets a firm limit on how much pollution can be emitted--often a limit that is declining over time--and then allows trading of permits within that limit. But if faster reductions in the level of pollution seem useful or cost-effective, perhaps because of changes in the market or in scientific information, the pollution quota often can't be changed. Moreover, the idea that the pollution limit might be adjusted up or down in the future would make it hard for a market in pollution permits to operate.

Burtraw points out that a similar dynamic applies to reducing carbon emissions. Back in 2009, the Waxman-Markey bill that would have set up a cap-and-trade system for carbon emissions passed the House of Representatives but failed in the Senate. At the time, supporters of the bill made some dire predictions about how carbon emissions would increase as a result. But here's the unexpected aftermath. Back at the time of Waxman-Markey, the goal was to reduce carbon emissions by about 10% by 2020, relative to 2005 levels.

However, when Waxman-Markey failed, other events happened. California, along with some other states, has imposed rules to limit carbon emissions. The Environmental Protection Agency has looked for ways under existing rules to reduce carbon emissions. The new rules requiring higher fuel economy will reduce carbon emissions. And the advent of more plentiful and cheaper natural gas will reduce carbon emissions. Thus, Burtraw writes: "Total reductions [in carbon dioxide emissions] by 2020—accounting for changes due to subnational policy, regulatory actions under the Clean Air Act, and advantageous secular trends—are on track to yield emissions reductions of 16.7 percent relative to 2005 levels. The anticipated emissions reductions under the Clean Air Act regime exceed those reductions within the United States that would have occurred under cap and trade."

Again, if cap-and-trade legislation had passed back in 2009, it would have set an overall limit on carbon emissions. With that limit in place, any other changes--like cheaper natural gas or higher fuel efficiency standards for cars--would just have made it easier to meet the cap, probably leading to less need to reduce carbon emissions in any other way.

None of this means that tradeable pollution permits are a bad idea. After all, they were effective in reducing SO2 emissions in a cost-effective manner in the 1990s, as well as reducing lead emissions in the 1980s. But when there are ongoing changes in the economic factors affecting the magnitude of emissions, the technology for reducing emissions, and the scientific evidence about the cost of emissions, setting a specific limit on the quantity of pollution allowed may be even harder than it looks.

Why GDP Growth is Good

Most teachers of economics at some point have to address the existential question from students: Is more output always good? Nicholas Oulton has a nice punchy essay called "Hooray for GDP!", written as an "Occasional Paper" for the Centre for Economic Performance at the London School of Economics and Political Science. Oulton summarizes the main arguments against focusing on GDP in this way:

1. GDP is hopelessly flawed as a measure of welfare. It ignores leisure and women's work in the home. It takes no account of pollution and carbon emissions.
2. GDP ignores distribution. In the richest country in the world, the United States, the typical person or family has seen little or no benefit from economic growth since the 1970s. But over the same period inequality has risen sharply.
3. Happiness should be the grand aim of policy. But the evidence is that, above a certain level, a higher material standard of living does not make people any happier. …
4. Even if higher GDP were a good idea on other grounds, it's not feasible because the environmental damage would be too great.

Oulton then addresses each question, not by attempting any kind of exhaustive review, but by providing a selective sampling of the arguments and evidence. Here are some of his answers, mixed with my own.

1) GDP is flawed as a measure of welfare.

Yes, GDP leaves out a lot that matters, and a lot that should matter. There's no surprise in this: every intro econ textbook for decades has taught this point. My favorite quotation on this point comes from a 1968 speech by Robert Kennedy.

Oulton makes the useful distinction that GDP is a measure of output that is not, and was never intended to be, a measure of welfare, but that per capita GDP is clearly a component of welfare--that is, when one makes a list of all the factors that benefit people, a higher level of consumption of a wide range of goods and services is an item on that list. In addition, per capita GDP is a broader indicator of welfare because, looking around the world, GDP is clearly broadly correlated with health, education, democracy, and the rule of law.

For thinking about social welfare, it is often useful to look at statistics other than GDP. For example, here's one of my earlier posts about economists attempting to estimate "Household Production: Levels and Trends."

My own favorite comment on this point is from a 1986 essay by Robert Solow ("James Meade at Eighty," Economic Journal, December 1986, 986-988), where he wrote: "If you have to be obsessed by something, maximizing real National Income is not a bad choice." At least to me, the clear implication is that it's perhaps better not to be obsessed by one number, and instead to cultivate a broader and multidimensional perspective. But yes, if you need to pick one number, real per capita GDP isn't a bad choice. To put it another way, a high or rising GDP certainly doesn't assure a high level of social welfare, but it makes it easier to accomplish those goals than a low and falling GDP.

2) GDP ignores distribution. 

Yes, it does. Again, GDP is a measure of output, not of everything that can and should matter in thinking about society. I've often noted on this website that inequality of wages and household incomes has been rising in recent decades, and that I believe this trend is a genuine problem.

But even though high and rising inequality is (I believe) a problem, that doesn't mean that high or rising GDP is the cause of the problem. It's not at all clear that being in an economy with a higher level of GDP leads to more inequality. From a global perspective, many economies with the greatest level of inequality are in Latin America or in Africa. Many high-income countries in western Europe have much greater equality of incomes than the U.S. economy. Periods of rapid economic growth in the U.S. economy--say, back in much of the 1950s and the 1960s--were not associated with rising inequality.

Oulton writes: "Inequality concerns are real but there is still a case in my view for separating questions of growth from questions of distribution." In my own mind, this analytical distinction started in earnest (although I'm sure there were predecessors) with John Stuart Mill's classic 1848 text, Principles of Political Economy, where the first major section of the book is about "Production" and the second major section is about "Distribution." In Mill's "Autobiography," he writes that he came to appreciate this distinction, and indeed to view it as one of the central distinguishing features of his book, as a result of discussions with his wife, Harriet Taylor Mill. Mill wrote:

"The purely scientific part of the Political Economy I did not learn from her; but it was chiefly her influence that gave to the book that general tone by which it is distinguished from all previous expositions of political economy that had any pretension to being scientific…. This tone consisted chiefly in making the proper distinction between the laws of the Production of wealth—which are real laws of nature, dependent on the properties of objects—and the modes of its Distribution, which, subject to certain conditions, depend on human will."


3) Happiness should be the grand aim of policy. 

The question here, of course, is how "happiness" is judged. It's true that on surveys which ask people to rank how happy they are on a scale from 1-10, the happiness level of people in high-income countries isn't much higher than a few decades ago. There is an ongoing argument over how to interpret these results. Is happiness really "positional"--that is, do I judge my happiness relative to others at the same time, so that if everyone has more consumption, happiness doesn't rise? Are these kinds of survey results an artefact of the survey itself--that is, someone who answers that they are a "7" on the happiness scale in 2010 isn't saying that they would also be a "7" on the happiness scale if they had a 1970 level of income? Here's a post from last May on the connections from economic growth to survey questions about happiness, with some emphasis on how it applies in China.

My sense is that most people actually get a lot of happiness from the goods and services of a modern economy, and they would not be equally happy if those goods and services were unavailable. Oulton makes an interesting argument here that there is a battle between process innovation and product innovation.  If both process innovation and product innovation rise together, then people have higher productivity and incomes, and happily spend those incomes on the new products that are available. If process innovation rises quickly, but product innovation does not, then people would have higher productivity and incomes, but nothing extra to spend them on–and thus might opt for much more leisure. Oulton has a nice thought experiment here:

"Imagine that over the 220 or so years since the Industrial Revolution began in Britain process innovation has taken place at the historically observed rate but that there has been no product innovation in consumer goods (though I allow product innovation in capital goods). UK GDP per capita has risen by a factor of about 12 since 1800. So people today would have potentially vastly higher incomes than they did then. But they can only spend these incomes on the consumer goods and services that were available in 1800. In those days most consumer expenditure was on food (at least 60% of the typical family budget), heat (wood or coal), lighting (candles) and clothing (mostly made from wool or leather). Luxuries like horse-drawn carriages were available to the rich and would now in this imaginary world be available to everyone. But there would be no cars, refrigerators, washing machines or dishwashers, no radio, cinema, TV or Internet, no rail or air travel, and no modern health care (e.g. no antibiotics or antiseptics). How many hours a week, how many weeks a year and how many years out of the expected lifetime would the average person be willing to work? My guess is that in this imaginary world people would work a lot less and take a lot more leisure than do real people today. After all, most consumer expenditure nowadays goes on products which were not available in 1800 and a lot on products not invented even by 1950."
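As a rough check on the growth arithmetic in that passage (the calculation is mine, not Oulton's): a twelvefold rise over the roughly 212 years from 1800 implies an average annual growth rate of

$$g = 12^{1/212} - 1 = e^{\ln 12 / 212} - 1 \approx 1.2\% \text{ per year},$$

a reminder of how modest the annual rate behind that cumulative transformation really is.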

Of course, over the last century or so workweeks have gotten considerably shorter, and in that sense, people have chosen to take some of the rewards of process innovation in the form of more leisure. But most people prefer to follow a path where they can earn sufficient income to enjoy the results of product innovation. As I like to point out, the modern economy offers a fair amount of freedom in terms of work choices.  Throughout their lives, people often have a choice about whether they will choose to follow a job path that is less demanding in time and energy, but also provides lower income. Some people seek out such choices, but most do not.

4) GDP and the costs of environmental damage. 

Oulton quotes from a 2012 Royal Society report that is concerned about overpopulation and a sustainable environment. He writes: "In its preferred scenario GDP per capita is equalised across the world at $20,000 in 2005 PPP terms by 2050 (Report, page 81). The UK's GDP per capita in 2005 was $31,580 in 2005 PPPs so this would imply a 37% cut. When they think about economic growth natural scientists tend to think about biological processes, say the growth of bacteria in a Petri dish. Seed the dish with a few bacteria and what follows looks like exponential growth for a while. But eventually as the bacteria cover most of the dish growth slows down. When the dish is completely covered growth stops. End of story."

Of course, the world economy isn't a petri dish, and people aren't bacteria. Economists have been drawing up models of economic growth with fixed amounts of land or minerals, or where economic activities emit pollution, for some decades now. Oulton summarizes the basic lesson: "These models all have in common the result that perpetual exponential growth is possible provided that technical progress is sufficiently rapid."
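To give one stylized version of that result (the functional form and notation are my own illustration, not taken from Oulton's paper): suppose output is Cobb-Douglas in technology, a resource flow, and labor,

$$Y_t = A_t\,R_t^{\beta}\,L^{1-\beta}, \qquad A_t = A_0 e^{g_A t}, \qquad R_t = R_0 e^{-\delta t},$$

where the resource flow shrinks at rate $\delta > 0$ so that cumulative extraction $\int_0^{\infty} R_t\,dt = R_0/\delta$ stays within a finite stock. Output then grows at rate $g_A - \beta\delta$, so perpetual growth is possible despite the finite resource whenever technical progress is fast enough that $g_A > \beta\delta$.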

In other words, it's certainly possible to draw up a disaster scenario where resource or environmental limitations lead to grief at a global level. It's also possible that with a combination of investments in technology and human capital, economic growth can at least for a considerable time overcome such limitations. For an example of analysis along these lines, the United Nations has put out the first of what is intended to be a series of reports on how changes in different types of capital can offset each other (or not), which I posted about in "Sustainability and the Inclusive Wealth of Nations."

As Oulton notes, the practical question here is not whether resource and environment limits must eventually bind at some distant point in the future, "but only whether it makes sense to advocate growth over the next 5, 10, 25, 50 or 100 years."

In the U.S. economy, 15% of the population is below what we call the "poverty line," and their life prospects are diminished as a result. About 2.5 billion people in the world live on less than $2/day. I do not see a practical way of raising the standard of living for these people, or for their children, unless rising GDP plays a central role.

Does Big-Time Football Reduce College Grades?

The University of Oregon Ducks football team is undefeated and ranked second in the country after beating Washington last weekend by a score of 52-21. But three economists from the University of Oregon--Jason M. Lindo, Isaac D. Swensen, and Glen R. Waddell--are using data from their school to ask "Are Big-Time Sports a Threat to Student Achievement?" Their analysis appears in the American Economic Journal: Applied Economics 2012, 4(4): 254-274. The journal isn't freely available online, but many in academia will have access through library subscriptions.

Here is the approach they take: "Our primary source of data is University of Oregon student transcripts, covering all undergraduate classes administered from fall quarter of 1999 through winter quarter of 2007. … We combine these data with readily available reports of the football team's win-loss records … Over our sample period, the winning percentage is 69.7 percent, on average, and varies from 45.5 percent to 90.9 percent." Because the researchers have data on individual students, they can make a statistical comparison of how the grade point average for an individual student changes from year to year, and see if it is correlated with the winning percentage of the football team. They can also do a number of other calculations, like adjusting for a time trend so that grade inflation is taken into account, as well as looking at how responses differ by gender, by income level (measured by which students are receiving financial aid), and by test scores before entering the university.

There's one additional element of complexity here: In most college classes, grades are given according to some explicit or implicit "curve": that is, even if the academic performance of all students was worse one fall in absolute terms, if a certain percentage of students get As, Bs, Cs, and so on, then grade point averages might not show the drop in absolute level of performance. This suggests two kinds of comparisons: 1) male students in their data are more likely to watch football than female students, so one can look at how the grade gap between male and female students is related to the winning percentage of the football team; and 2) one can compare fall grades when the football team is playing to winter/spring term grades. Lindo, Swensen, and Waddell summarize their results this way:

"That is, our preferred estimates are based on considering how a student's grades deviates from his or her own average grades as the winning percentage varies from its average, and then how this response varies across gender. With our analysis we show that male grades fall significantly with the success of the football team, both in absolute terms and relative to females. There is also pronounced heterogeneity among students, suggesting that the impact is largest among students from relatively disadvantaged backgrounds and those of relatively low ability. …

"Relative to females, males report being more likely to increase alcohol consumption, decrease studying, and increase partying around the success of the football team. Yet, both male and female students report that their behavior is responsive to athletic success. This suggests that female performance is likely affected by the performance of the football team as well, but that this effect is masked by grade curving. … [A] 25 percentage point increase in the football team's winning percentage will increase the gender gap in GPAs … by 8.5 percent."

After comparing fall and winter academic terms, "only in the quarter we associate with football—the fall quarter—is there movement in the gender gap in academic performance that varies systematically with athletic success."
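For readers who want to see the shape of that within-student comparison, here is a minimal sketch with a toy dataframe; the numbers and variable names are invented, and this illustrates the idea of student fixed effects, not the authors' actual code or data.

```python
import pandas as pd

# Toy data: two students, two seasons with different team winning percentages.
df = pd.DataFrame({
    "student": ["s1", "s1", "s2", "s2"],
    "male":    [1, 1, 0, 0],
    "win_pct": [0.455, 0.909, 0.455, 0.909],
    "gpa":     [3.2, 2.9, 3.5, 3.4],
})

# "Fixed effects" step: measure each grade as a deviation from the
# student's own average, sweeping out stable student-level differences.
df["gpa_dev"] = df["gpa"] - df.groupby("student")["gpa"].transform("mean")
df["win_dev"] = df["win_pct"] - df["win_pct"].mean()

# Slope of GPA deviations on winning-percentage deviations, by gender.
for male, g in df.groupby("male"):
    slope = (g["gpa_dev"] * g["win_dev"]).sum() / (g["win_dev"] ** 2).sum()
    print("male" if male else "female", round(slope, 2))
```

In these made-up numbers, both slopes are negative but the male slope is steeper, which is the qualitative pattern the paper reports.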

I'll spare you a homily on the "true purpose" of higher education, and the extent to which big-time sports supports or undermines that purpose. But for those who hold the misapprehension that college sports provide a financial subsidy to the academic programs of these institutions, Lindo, Swensen, and Waddell toss out one cold fact: "In 2010, 211 out of 218 Division I athletics departments at universities subject to open records laws received a subsidy from their student body or general fund. These subsidies are substantial and rapidly growing. From 2006 to 2010, the average subsidy increased 25 percent, to $9 million."

Jobs: A World Bank Perspective

The theme for the 2013 World Development Report from the World Bank is one word: "Jobs." The discussion reaches usefully beyond the issue of recovery from the Great Recession (although that topic is covered as well) and looks at issues of job creation in the long run. At least to me, much of the underlying message is not so much about the raw numbers of jobs that need to be created, but about the idea that stable relationships between employers and employees, with shared benefits for both, are part of a broader web of social institutions and interrelationships.

"As the world struggles to emerge from the global crisis, some 200 million people—including 75 million under the age of 25—are unemployed. Many millions more, most of them women, find themselves shut out of the labor force altogether. Looking forward, over the next 15 years an additional 600 million new jobs will be needed to absorb burgeoning working-age populations, mainly in Asia and Sub-Saharan Africa. Meanwhile, almost half of all workers in developing countries are engaged in small-scale farming or self-employment, jobs that typically do not come with a steady paycheck and benefits.

The problem for most poor people in these countries is not the lack of a job or too few hours of work; many hold more than one job and work long hours. Yet, too often, they are not earning enough to secure a better future for themselves and their children, and at times they are working in unsafe conditions and without the protection of their basic rights. Jobs are instrumental to achieving economic and social development. Beyond their critical importance for individual well-being, they lie at the heart of many broader societal objectives, such as poverty reduction, economy-wide productivity growth, and social cohesion. The development payoffs from jobs include acquiring skills, empowering women, and stabilizing post-conflict societies."

Here's a figure showing, on the left, how much absolute job creation is needed in various regions of the world by 2020, given population growth, and, on the right, the annual rates of job creation needed. The challenge of creating jobs with decent and growing compensation at these rates is an enormous one.

It's important to remember that in much of the world, many of those counted as having a "job" have neither an employer nor a regular paycheck.

"To many, a 'job' brings to mind a worker with an employer and a regular paycheck. Yet, the majority of workers in the poorest countries are outside the scope of an employer-employee relationship. Worldwide, more than 3 billion people are working, but their jobs vary greatly. Some 1.65 billion are employed and receive regular wages or salaries. Another 1.5 billion work in farming and small household enterprises, or in casual or seasonal day labor. Meanwhile, 200 million people, a disproportionate share of them youth, are unemployed and actively looking for work. Almost 2 billion working-age adults, the majority of them women, are neither working nor looking for work, but an unknown number of them are eager to have a job."

As the figure shows, 70-80% of those who have jobs in sub-Saharan Africa are in nonwage employment. The figure illustrates that the prevalence of wage employment is actually closely associated with economic development, and more broadly with whether the society is one in which organizations called private firms have the ability and flexibility to create themselves and to expand.

World Bank reports always make me smile a bit when they discuss the role of the private and the public sector. My sense is that many of the readers of such reports are skeptical of free market economics. Thus, the economists at the World Bank find themselves needing to straddle the fence: on one side, they do speak up for the importance of the private sector and free markets; on the other side, they spend a lot of words pointing out that government has an important role to play, and leaving the door open for the possibility that certain government interventions might be useful. Thus, here's the report on the centrality of the private sector in creating jobs:

"[T]he private sector is the main engine of job creation and the source of almost 9 of every 10 jobs in the world. Between 1995 and 2005, the private sector accounted for 90 percent of jobs created in Brazil, and for 95 percent in the Philippines and Turkey. The most remarkable example of the expansion of employment through private sector growth is China. In 1981, private sector employment accounted for 2.3 million workers, while state-owned enterprises (SOEs) had 80 million workers. Twenty years later, the private sector accounted for 74.7 million workers, surpassing, for the first time, the 74.6 million workers in SOEs. In contrast to the global average, in some countries in the Middle East and North Africa, the state is a leading employer, a pattern that can be linked to the political economy of the postindependence period, and in some cases to the abundance of oil revenues. For a long period, public sector jobs were offered to young college graduates. But as the fiscal space for continued expansion in public sector employment shrank, 'queuing' for public sector jobs became more prevalent, leading to informality, a devaluation of educational credentials, and forms of social exclusion. A fairly well-educated and young labor force remains unemployed, or underemployed, and labor productivity stagnates."

And on the other side of the fence, here's the report on the importance of government, along with implications that a great many types of government interventions in labor markets can be justified.

"While it is not the role of governments to create jobs, government functions are fundamental for sustained job creation. The quality of the civil service is critically important for development, whether it is teachers building skills, agricultural extension agents improving agricultural productivity, or urban planners designing functional cities. Temporary employment programs for the demobilization of combatants are also justified in some circumstances. But as a general rule it is the private sector that creates jobs. The role of government is to ensure that the conditions are in place for strong private-sector-led growth, to understand why there are not enough good jobs for development, and to remove or mitigate the constraints that prevent the creation of more of those jobs. Government can fulfill this role through a three-layered policy approach:
• Fundamentals. … Macroeconomic stability, an enabling business environment, human capital accumulation, and the rule of law are among the fundamentals. … Adequate infrastructure, access to finance, and sound regulation are key ingredients of the business environment. Good nutrition, health, and education outcomes not only improve people's lives but also equip them for productive employment. …
• Labor policies. … Labor policy should avoid two cliffs: the distortionary interventions that clog the creation of jobs in cities and in global value chains, and the lack of mechanisms for voice and protection for the most vulnerable workers, regardless of whether they are wage earners. …
• Priorities. Because some jobs do more for development than others, it is necessary to understand where good jobs for development lie, given the country context."

This mildly split personality–between emphasizing the private sector and markets on one side and looking for possible roles for government on the other–is probably an occupational hazard for economists. I doubtless suffer the affliction myself. But that said, I fear that the World Bank report may understate the difficulties of large-scale private-sector job creation in many countries around the world. Ultimately, true job creation occurs when an employer, without relying on government subsidies or handouts, produces a product at a price that customers are actually willing to pay. It requires letting firms fail when they prove incapable of meeting this standard. And it requires letting firms continue and grow when they do meet it, even though a successful and growing firm can become a source of political and economic power that might in some way challenge the existing government. All of which is a long way of saying that when 40, 50, or 80 percent of the workers in an economy are in non-wage employment, the transition to a situation in which private-sector firms are numerous and large enough to offer wage employment to a very large share of a country’s adults is an enormous and difficult task.

Working From Home: Census Estimates

I try to work from home a couple of days each week: it saves me the commuting time, and I can be home when the children get off the school bus in the afternoon. Thus, I’m also intimately aware of the temptations of working from home, like a sudden overpowering urge to rearrange the living room furniture or to bake a batch of cookies. In their report on “Home-Based Workers in the United States: 2010,” a group of U.S. Census Bureau analysts (Peter J. Mateyka, Melanie A. Rapino, and Liana Christin Landivar) steer clear of the gains and losses from working at home and lay out the statistics on how widespread the practice is. The underlying data come from two sources that are not directly comparable, because they ask about home-based work in somewhat different ways: the Survey of Income and Program Participation (SIPP) and the American Community Survey (ACS). That said, here are a few facts that jumped out at me.

The proportion of workers who work from home has increased in the last decade or so, but it remains relatively low. “The percentage of all workers who worked at least 1 day at home increased from 7.0 percent in 1997 to 9.5 percent in 2010, according to SIPP. During this same time period, the population working exclusively from home in SIPP increased from 4.8 percent of all workers to 6.6 percent … The percentage of workers who worked the majority of the workweek at home increased from 3.6 percent to 4.3 percent of the population between 2005 and 2010, according to the ACS.”

Those who work at home part of the time and at a workplace part of the time typically have higher incomes than either those who are at a workplace all the time or those who are at home all the time. “Median personal earnings for mixed workers were significantly higher ($52,800) compared with onsite ($30,000) and home ($25,500) workers. While home workers had lower personal earnings than onsite workers did, respondents that reported working at least 1 day at home had significantly higher household incomes than respondents that reported working only onsite.”

The prevalence of home-based work has followed a U-shape in recent decades, falling in the 1960s and 1970s but rising since then. “In the 1960s, home-based workers were primarily self-employed family farmers and professionals, including doctors and lawyers. Home-based work in the United States declined from 1960 to 1980, driven by changes in market conditions and the agriculture industry that began decades prior and favored large specialized firms over family farms. In 1980, the multiple-decade decline in home-based work reversed, led partly by self-employed home-based workers in professional and service industries.”

The most rapid growth in home-based work has come among government employees.

“Between 2000 and 2010, there was a 67 percent increase in home-based work for employees of private companies. Although still underrepresented among home-based workers, the largest increase in home-based work during this decade was among government workers, increasing 133 percent among state government workers and 88 percent among federal government workers.”

Those \”mixed\” workers who are partly at home and partly at the office are more likely to work at home on Mondays and Fridays (a fact that seems to me creates some suspicion about how much work is actually getting done). \”About 90 percent of home workers reported working
Monday through Friday at home, compared with less than 40 percent of mixed workers. The most
popular days worked at home for mixed workers were Monday (37.6 percent) and Friday (37.8 percent) …\”

The top three cities for home-based work are Boulder, CO, Medford, OR, and Santa Fe, NM. Two of the top ten areas are in my home state of Minnesota–small cities with a branch of the state university system about an hour’s drive from Minneapolis and St. Paul.

Brazilian Soap Operas and Fertility Rates

There’s always a chicken-and-egg question about the interrelationship between television and social values. Does the behavior shown in television shows change social values, or does it just reflect changes in social values that have already happened? Eliana La Ferrara, Alberto Chong, and Suzanne Duryea offer one example in which television does seem to have altered social values in “Soap Operas and Fertility: Evidence from Brazil.” The paper appears in the most recent issue of the American Economic Journal: Applied Economics (2012, 4(4): 1–31). The journal isn’t freely available on-line, but those in academia will often have on-line access through their libraries.

To disentangle the cause and effect of television and social values, the ideal experiment would be to have some randomly selected areas with television, and other nearby areas without television–and then to compare fertility across the two. While a pure random experiment of this kind is hard to come by, many developing countries saw a dramatic expansion in the availability of television from the 1970s up through the 1990s. Thus, researchers can look at what happened in different areas as television coverage arrived. La Ferrara, Chong, and Duryea start their discussion this way:

“In the early 1990s, after more than 30 years of expansion of basic schooling, over 50 percent of 15 year olds in Brazil scored at the lowest levels of the literacy portion of the Programme for International Student Assessment (PISA), indicating that they could not perform simple tasks, such as locating basic information within a text. People with 4 or fewer years of schooling accounted for 39 percent of the adult population in the urban areas, and nearly 73 percent in rural areas as measured by the 2000 census. On the other hand, the share of households owning a television set had grown from 8 percent in 1970 to 81 percent in 1991, and remained approximately the same 10 years later. The spectacular growth in television viewership in the face of slow increases in education levels characterizes Brazil as well as many other developing countries. Most importantly, it suggests that a wide range of messages and values, including important ones for development policy, have the potential to reach households through the screen as well as through the classroom. …”

“We are interested in the effect of exposure to one of the most pervasive forms of cultural communication in Brazilian society, soap operas or novelas. Historically, the vast majority of the Brazilian population, regardless of social class, has watched the 8 pm novela. In the last decades, one group, Rede Globo, has had a virtual monopoly over the production of Brazilian novelas. Our content analysis of 115 novelas aired by Globo in the two time slots with the highest audience between 1965 and 1999 reveals that 72 percent of the main female characters (age 50 and lower) had no children at all, and 21 percent had only one child. This is in marked contrast with the prevalent fertility rates in Brazilian society over the same period.”

Fertility rates have been dropping in Brazil in the last few decades, as in many other countries. “The total fertility rate was 6.3 in 1960, 5.8 in 1970, 4.4 in 1980, 2.9 in 1991, and 2.3 in 2000. It is noteworthy that this decline was not the result of deliberate government policy. In Brazil no official population control policy was enacted by the government and, for a period of time, advertising of contraceptive methods was even illegal. The change therefore originated from a combination of supply factors related to the availability of contraception and lower desired fertility.” Of course, there is a standard argument among economists and demographers that as a country goes through a “demographic transition,” with the economy becoming wealthier and life expectancies increasing, fertility rates decline. The estimates in this study suggest that receiving the TV signal can account for about 7 percent of the overall decline in fertility. The authors write: “Globo coverage is associated with a decrease in the probability of giving birth of 0.5 percentage points, which is 5 percent of the mean. The magnitude of this effect is comparable to that associated with an increase of 1.6 years in women’s education.”
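For readers who want to see the shape of this kind of estimate, here is a minimal sketch, with synthetic data, of the two-way fixed-effects regression typically used to study a staggered rollout like this one. The variable names, numbers, and specification below are my own illustration, not the authors’ actual code or data.

```python
# A minimal sketch of a staggered-rollout, two-way fixed-effects regression.
# All variable names and numbers are hypothetical; not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for area in range(50):
    rollout_year = rng.integers(5, 15)   # year the Globo signal arrives (hypothetical)
    for year in range(20):
        covered = int(year >= rollout_year)
        # Baseline birth rate around 10 percent, reduced 0.5 percentage
        # points once the signal arrives, plus noise.
        rate = 0.10 - 0.005 * covered + 0.002 * rng.standard_normal()
        rows.append({"area": area, "year": year, "globo": covered, "birth_rate": rate})
df = pd.DataFrame(rows)

# Area fixed effects absorb time-invariant traits of each area; year fixed
# effects absorb nationwide trends such as the demographic transition.
# What remains for `globo` is the within-area change at rollout.
fit = smf.ols("birth_rate ~ globo + C(area) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["area"]}
)
print(fit.params["globo"])  # recovers roughly -0.005
```

Note, incidentally, what the 5-percent-of-the-mean figure implies: a 0.5 percentage point drop that is 5 percent of the mean puts the mean annual probability of giving birth at roughly 10 percent, which is the baseline built into the synthetic data above.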

Their investigation also turned up a number of supportive connections. For example: “The (negative) effect of Globo exposure is stronger for households with lower education and wealth, as one would expect given that these households are relatively less likely to get information from written sources or to interact with peers that have small family sizes.” “We find that decreases in fertility were stronger in years immediately following novelas that portrayed messages of upward social mobility, consistent with the desire to conform with behavior that leads to positive life outcomes.” “Also, we find that the effect of Globo availability in any given year was stronger for women whose age was closer to that of the main female characters portrayed that year.”

And most striking to me: “[W]e estimate the probability that the 20 most popular names chosen by parents for their newborns in a given metropolitan area include one or more names of the main characters of novelas aired in the year in which the child was born. This probability is 33 percent if the area where parents lived received the Globo signal and only 8.5 percent if it did not, a statistically significant difference. Since novela names tend to be very idiosyncratic in Brazil, we take this evidence as suggestive of a strong link between novela content and behavior.”

Eyeballing the World Economy

Like most people in my professional universe, I try to carry around in my head some basic facts and comparisons. Here are some basic tables and figures about the world economy, which I have edited and adapted (by leaving out some smaller countries) from the 2012 edition of “Charting International Labor Market Comparisons” published by the U.S. Bureau of Labor Statistics.

As a starting point, make a mental list of the large economies in the world–say, those with GDP larger than $1 trillion. Here’s a figure listing them, with GDP calculated in 2010 U.S. dollars using purchasing-power-parity (PPP) exchange rates. For most people, it’s not a big surprise to see the U.S. at the top, followed by China and Japan. But seeing India ahead of Germany and the United Kingdom is a bit unexpected, as is seeing Brazil ahead of Italy, Mexico ahead of Spain, and South Korea ahead of Canada.
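For those who haven’t seen a PPP conversion worked out, here is a toy calculation of the difference between converting GDP at market exchange rates and at PPP rates. Every number below is made up for illustration; none of them are BLS figures.

```python
# Toy illustration of a purchasing-power-parity (PPP) conversion.
# All numbers are hypothetical, chosen only to show the mechanics.

gdp_local_currency = 40_000_000_000_000  # GDP in local currency units (hypothetical)
market_rate = 6.8    # local currency units per U.S. dollar at market rates (hypothetical)
ppp_factor = 3.9     # local units needed to buy what $1 buys in the U.S. (hypothetical)

gdp_at_market_rates = gdp_local_currency / market_rate
gdp_at_ppp = gdp_local_currency / ppp_factor

# When local prices are lower than U.S. prices, the PPP factor is below the
# market exchange rate, so PPP-based GDP exceeds market-rate GDP.
print(f"Market-rate GDP: ${gdp_at_market_rates / 1e12:.1f} trillion")
print(f"PPP GDP:         ${gdp_at_ppp / 1e12:.1f} trillion")
```

This is why lower-income economies like China and India rank higher in PPP comparisons than in market-rate comparisons: a dollar’s worth of local currency buys more at home than it would in the United States.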

One aspect of that omnibus term “globalization” is that the U.S. economy plays a smaller role in the world economy, while China and other nations play a larger one. Here’s a figure showing which countries produced what share of global GDP from 1990 to 2010. The decline in Japan’s importance is more than offset by the rise of China. The U.S. and European shares of world output are falling, and the “rest of the world” share is rising.

Of course, comparisons across countries don’t take population differences into account. As a final test, form a mental picture of what per capita GDP or GDP per worker looked like in 2010. Again, it’s no shock to see the U.S. economy near the top of the list (although Norway, not shown here, is actually a little higher in both categories). At least to me, it is a modest surprise to see the western European countries outranking Japan so decisively, and a bigger surprise to see South Korea now so close behind Japan. Mexico considerably exceeds Brazil, with both Latin American countries lagging behind eastern Europe. And China remains quite poor on a per capita basis. As the BLS reports: “Gross domestic product (GDP) per capita in the United States was approximately six times larger than the GDP per capita in China.” Of course, this also suggests that China has the potential, with appropriate policies, for at least several decades more of very rapid economic growth.
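A quick back-of-the-envelope calculation shows why a sixfold gap leaves decades of room for catch-up. The sixfold figure is from the BLS comparison above; the 5-percentage-point growth premium is my own assumption, not a number from the report.

```python
# Back-of-the-envelope catch-up arithmetic (my own illustration, not BLS data).
import math

gap = 6.0               # U.S. per capita GDP relative to China's (BLS comparison)
growth_premium = 0.05   # assumed extra annual growth in China vs. the U.S. (hypothetical)

# Years until per capita GDP converges, if the premium were sustained:
years_to_close = math.log(gap) / math.log(1 + growth_premium)
print(f"{years_to_close:.0f} years")  # roughly 37 years
```

Even under this generous assumption of a sustained 5-point growth advantage, closing the gap takes well over a generation, which is the arithmetic behind the claim of “at least several decades” of potential rapid growth.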