What's Good for General Motors …

As President Obama and Mitt Romney jostled back and forth about the bailout of General Motors and Chrysler during the debate last night, I was naturally reminded of the 1953 confirmation hearings for Charles E. Wilson for Secretary of Defense. Wilson had been president of GM since 1941, overseeing both the company's transformation to wartime production and then its return to peacetime. Much of the confirmation hearing revolved around how he would sell off or insulate his financial holdings from his government job–and more broadly, the difficulties of separating his role at GM from the role of Secretary of Defense.

On January 15, 1953, Wilson had this famous exchange with Senator Robert Hendrickson, a Republican from New Jersey:

"Senator Hendrickson. Mr. Wilson, you have told the committee, I think more than once this morning, that you see no area of conflict between your interest in the General Motors Corp. or the other companies, as a stockholder, and the position you are about to assume.
Mr. Wilson. Yes, sir.
Senator Hendrickson. Well now, I am interested to know whether if a situation did arise where you had to make a decision which was extremely adverse to the interests of your stock and General Motors Corp. or any of these other companies, or extremely adverse to the company, in the interests of the United States Government, could you make that decision?
Mr. Wilson. Yes, sir; I could. I cannot conceive of one because for years I thought what was good for our country was good for General Motors, and vice versa. The difference did not exist. Our company is too big. It goes with the welfare of the country. Our contribution to the Nation is quite considerable. I happen to know that toward the end of the war—I was coming back from Washington to New York on the train, and I happened to see the total of our country's lend-lease to Russia, and I was familiar with what we had done in the production of military goods in the war and I thought to myself, 'My goodness, if the Russians had a General Motors in addition to what they have, they would not have needed any lend-lease,' so I have no trouble—I will have no trouble over it, and if I did start to get into trouble I would put it up to the President to make the decision on that one. I cannot conceive of what it would be.
Senator Hendrickson. Well, frankly, I cannot either at the moment, but we never know what is in store for us.
Mr. Wilson. I cannot conceive of it. I do not think we are going to get into any foolishness like seizing the properties or anything like that, you know, like the Iranians are in over there, when they got into —
Senator Hendrickson. I certainly hope not.
Mr. Wilson. You see, if that one came up for some reason or other then I would not like that. I do not think I would be on the job; I think I would quit because I would be so out of sympathy with trying to nationalize the industries of our country. I think it would be a terrible thing. That is about the only one I can think of. Of course, I do not think that is even a remote possibility. I think the whole trend of our country is the other way."

I've quoted here from the transcript of the actual hearings as printed in "Nominations: Hearings before the Committee on Armed Services, United States Senate, Eighty-third Congress, first session, on nominee designates Charles E. Wilson, to be Secretary of Defense; Roger M. Kyes, to be Deputy Secretary of Defense; Robert T. Stevens, to be Secretary of the Army; Robert B. Anderson, to be Secretary of the Navy; Harold E. Talbott, to be Secretary of the Air Force …" January 15, 1953.

But the hearing had been closed to the public, and the transcript didn't come out for a few days. When reporters asked what had been said, they were told that Wilson had simply replied: "What's good for General Motors is good for the country." Democrats picked up the phrase on the campaign trail and used it against Republicans for being overly pro-business. The highly popular Li'l Abner comic strip had a character named General Bullmoose who often said: "What's good for General Bullmoose is good for the U.S.A.!"

The story goes that for a few years, when the quotation came up, Wilson would try to offer some context, but after a while he stopped bothering. When he stepped down as Secretary of Defense in 1957, he said: "I have never been too embarrassed over the thing, stated either way."

Of course, it's interesting that the Democratic party that bashed Charlie Wilson back in 1953 now finds itself in the position of arguing the modern version of "what's good for General Motors is good for the country." For those who want details about the actual bailout, my May 7 post on "The GM and Chrysler Bailouts" might be a useful starting point.

Here, I would just make the point that while GM remains an enormous company today, it was relatively much larger in the 1950s. In the Fortune 500 for 1955, General Motors was far and away the biggest U.S. company ranked by sales. GM had $9.8 billion in sales in 1955, with Exxon running second at $5.6 billion, U.S. Steel third at $3.2 billion, followed by General Electric at $3 billion. The GDP of the U.S. economy in 1955 was $415 billion, so for perspective, GM sales were 2.3% of the U.S. economy.

In 2012, GM's sales are $150 billion, but in the Fortune 500 for 2012, it now runs a distant fifth in sales among U.S. firms. The two biggest firms by sales were Exxon Mobil, at $450 billion, and WalMart, just a few billion behind. The GDP of the U.S. economy is about $15 trillion in 2012, so GM sales are now more like 1% of GDP, rather than 2%. And many of those who argue that it was in the national interest to give GM more favorable government-arranged bankruptcy conditions, rather than the usual bankruptcy court, would likely be quite unwilling to give bailouts to Exxon Mobil or to WalMart–despite how much larger they are.
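As a back-of-the-envelope check on those magnitudes, here is a minimal sketch using only the dollar figures quoted above (nothing here comes from an outside data source):

```python
# Back-of-the-envelope: company sales as a share of U.S. GDP,
# using the figures quoted in the text (in billions of dollars).
def share_of_gdp(sales_bn, gdp_bn):
    """Return sales as a percentage of GDP."""
    return 100.0 * sales_bn / gdp_bn

gm_1955 = share_of_gdp(9.8, 415)      # GM in 1955: roughly 2.3-2.4% of GDP
gm_2012 = share_of_gdp(150, 15000)    # GM in 2012: about 1% of GDP

print(f"GM share of GDP, 1955: {gm_1955:.2f}%")
print(f"GM share of GDP, 2012: {gm_2012:.2f}%")
```

The calculation confirms the relative shrinkage: GM's sales fell from roughly 2.3% of GDP in 1955 to about 1% in 2012, even though its dollar sales are far larger today.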

U.S. Birth Rates

Birth rates are one of those apparently dry statistics that have profound implications both for public policy and for how we live our day-to-day lives. Here are two figures from the Centers for Disease Control "National Vital Statistics Reports–Births: Preliminary Data for 2011."

Overall, U.S. birth rates are in decline (as shown by the light blue line) and have fallen below the levels seen in the Great Depression, which up until recent years had been the lowest fertility level in U.S. history. The 2011 fertility rate of 63.2 per 1,000 women aged 15-44 is the lowest in U.S. history. The number of actual births is near a high, but that is because of the overall expansion of the population.

This decline in birth rates is not evenly distributed across age groups. The figure below shows only data back to 1990. However, it illustrates that for women ages 15-19 (red line), 20-24 (light blue line), and 25-29 (yellow line), birth rates have dropped noticeably. Although birth rates have dropped substantially for these younger women, for mothers in the three age groups above 30 years, birth rates have been holding steady or rising.

These shifts in birth rates have powerful implications for our lives. Politically, the number of households with a direct personal interest in supporting programs that benefit children–because they have children under age 18 living at home–has declined. Here's a figure illustrating the long-term trend up through 2008 from a Census Bureau report. Back in the late 1950s and early 1960s, about 57% of U.S. family households included children under the age of 18. The share has been dropping since then, and is now approaching 45%. When it comes to decisions about everything from school funding and public parks up to pensions, health care, and other payments to those who are retired, it makes a real difference in a majority-rule political system whether the share of families with children at home is above or below 50%.

The shift in birth rates represents a dramatic change in our personal lives as well. In an article in the Fall 2003 issue of my own Journal of Economic Perspectives, economist and demographer Ronald Lee offered one point in particular that has stuck with me: "In 1800, women spent about 70 percent of their adult years bearing and rearing young children, but that fraction has decreased in many parts of the world to only about 14 percent, due to lower fertility and longer life."

More people are going through young adulthood and setting their original template for adult living without becoming parents, but then having children later, in their 20s, or their 30s, or their 40s. Many people are raising children not in their 20s or 30s, but instead in their 40s and 50s. And with longer life expectancies, many people will find that while their care-giving responsibilities for children cover fewer years, their care-giving responsibilities for aging parents will stretch over more years. Multi-generational family reunions used to follow a pattern of a few older folks of the grandparent generation, a larger number of their adult children and spouses, and then lots of children. In the future, there may be four or five generations in attendance at multigenerational reunions, and with many people having fewer or no children, the "family tree" will look taller and skinnier. We are already living our family lives in a substantially different context than earlier generations. The changes are large and getting larger, although sociologists and science fiction novelists probably have deeper insights than do the economists into how such changes are likely to affect our personal lives.

Can Agricultural Productivity Keep Growing?

As the world population continues to climb toward a projected 9 billion or so by mid-century, can agricultural productivity keep up? Keith Fuglie and Sun Ling Wang offer some thoughts in "New Evidence Points to Robust But Uneven Productivity Growth in Global Agriculture," which appears in the September issue of Amber Waves, published by the Economic Research Service at the U.S. Department of Agriculture.

Food prices have been rising for the last decade or so. Fuglie and Wang provide a figure that offers some perspective. The population data is from the U.N., showing the rise in world population from about 1.7 billion in 1900 to almost 7 billion in 2010. The food price data is a weighted average of 18 crop and livestock prices, where the prices are weighted by the share of agricultural trade for each product. Despite the sharp rise in demand for agricultural products from population growth and higher incomes, the rise in productivity of the farming sector has been sufficient so that the price of farm products fell by 1% per year from 1900 to 2010 (as shown by the dashed line).

What are some main factors likely to affect productivity growth in world agriculture in the years ahead? Here are some of the reactions I took away from the paper.

Many places around the world are far behind the frontier of agricultural productivity, and thus continue to have considerable room for catch-up growth. "Southeast Asia, China, and Latin America are now approaching the land and labor productivity levels achieved by today's industrialized nations in the 1960s."

The rate of output growth in agriculture hasn't changed much, but the sources of that output growth have been shifting away from higher use of inputs (machinery, irrigation, fertilizer) and toward a higher rate of productivity growth. "Global agricultural output growth has remained remarkably consistent over the past five decades–2.7 percent per year in the 1960s and between 2.1 and 2.5 percent average annual growth in each decade that followed. … Between 1961 and 2009, about 60 percent of the tripling in global agricultural output was due to increases in input use, implying that improvements in TFP accounted for the other 40 percent. TFP's share of output growth, however, grew over time, and by the most recent decade (2001-09), TFP accounted for three-fourths of the growth in global agricultural production."

Sub-Saharan Africa has perhaps the greatest need for a productivity surge, because of low incomes and expected rates of future population growth, but is hindered by a lack of institutional capacity to sustain the mix of public- and private-sector agricultural R&D that benefits local farmers. "Sub-Saharan Africa faces perhaps the biggest challenge in achieving sustained, long-term productivity growth in agriculture. … Raising agricultural productivity growth in Sub-Saharan Africa will likely require significantly higher public and private investments, especially in agricultural research and extension, as well as policy reforms to strengthen incentives for farmers. Perhaps the single most important factor separating countries that have successfully sustained long-term productivity growth in agriculture from those that have not is their capacity for agricultural R&D. Countries with national research systems capable of producing a steady stream of new technologies suitable for local farming systems generally achieve higher growth rates in agricultural TFP. … Improvements in what can broadly be characterized as the 'enabling environment' have encouraged the adoption of new technologies and practices by some countries; these include policies that improve economic incentives for producers, strengthen rural education and agricultural extension services, and improve rural infrastructure and access to markets."

There's no denying that feeding the global population as it rises toward 9 billion will pose some real challenges, but it is clearly within the realm of possibility. For more details on how it might be achieved, here's my post from October 2011 on the subject of "How the World Can Feed 9 Billion People."

U.S. Labor Unions: Fewer and Bigger

John Pencavel offers an angle on the evolution of American unions that I haven't seen recently, by looking at the number of unions and their size. Short story: the number of American unions has declined, but the biggest unions have many more members than their predecessors. Pencavel has a nice summary of this work in "Public-Sector Unions and the Changing Structure of U.S. Unionism," written as a "Policy Brief" for the Stanford Institute for Economic Policy Research.

As a starting point, the first table shows the top five U.S. unions by total number of members in 1974, and then the top five unions by total number of members in 2007. A couple of patterns jump out. First, back in 1974, most of the biggest unions–except for the National Education Association–were private-sector unions. However, by 2007, most of the biggest unions were public-sector unions. Second, both the biggest union in 2007 (the NEA) and the fifth-biggest union in 2007 (the UFCW) were substantially larger than the first- and fifth-biggest unions in 1974.

Pencavel compiles evidence that from 1974 to 2007, the total number of unions declined by 101, much of that due to unions consolidating with others. Unions with more than 1 million members actually had 2.7 million more total members in 2007 than back in 1974; conversely, unions with fewer than 1 million total members had 6.8 million fewer total members in 2007 than in 1974. In other words, union members are much more likely to belong to a very large union now than a few decades ago. Pencavel also compiles a longer-run table, going back to 1920, to show the share of union members belonging to the biggest unions. Notice that most of the shift toward much larger unions has occurred since the early 1980s, and most of it traces to a change in the membership share of the largest unions.


I suppose that one response to all this is "so what?" Well, if the largest companies in any given industry gain a dramatically larger share of the U.S. economy, it gives rise to comment. When the largest unions gain a dramatically larger share of the union sector, it's worth some introspection, too.

Pencavel draws his conclusions cautiously. "[A] union movement concentrated in a smaller number of large unions implies a union movement in which much of its wealth is allocated by a smaller number of decision-makers. … A serious concern is that a more concentrated union movement dominated by public sector unions may politicize unionism: That is, the focus of union activity will be less on attending to grievances and to the conditions of members at their place of work and more on issues that are the province of politics. Unions have always been involved in politics so this would be a change of degree, not of kind, but it is an important change because, ultimately, more politicized unionism will not help the typical union worker."

Albert Venn Dicey was a British legal theorist who died in 1922. Pencavel refers to one of his works from 1912, when Dicey was mulling over the issue of freedom of association, both in thinking about association through labor unions and association through product cartels. Pencavel notes that Dicey perceived a "classic dilemma" here, and Pencavel writes: "The principle that working people should have the freedom to form associations that represent and guard their interests is an intrinsic feature of a liberal society. But, if these associations exploit this principle to procure entitlements at the expense of others, a new base of authority and influence is created that, at best, enjoys a greater share of the national wealth and, at worst, challenges the jurisdiction of the state. A balance is needed between promoting the principle of free association and avoiding the creation of a mischievous organization."

Dicey perceived this dilemma for freedom of association as applying both to the power of large unions and the power of large corporations. Here's Pencavel quoting Dicey in 1912: "In England, as elsewhere, trade unions and strikes, or federations of employers and lock-outs; … in the United States, the efforts of Mercantile Trusts to create for themselves huge monopolies … force upon public attention the practical difficulty of so regulating the right of association that its exercise may neither trench upon each citizen's individual freedom nor shake the supreme authority of the State."

The central question about large public sector unions, of course, is whether, in their ability to act as a powerful interest group that influences the results of state and local elections and then to have their conditions of employment determined in negotiations with officials of those same state and local governments, they have exploited the freedom of association in a way that creates what Pencavel calls a "mischievous organization." Pencavel ends on a questioning note: "This is not a prediction; it is a concern."

For more about U.S. union members, see my post "Some Facts about American Unions" from last March.

The 2012 Nobel Prize to Shapley and Roth

The official announcement reads this way: "The Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 2012 was awarded jointly to Alvin E. Roth and Lloyd S. Shapley 'for the theory of stable allocations and the practice of market design.'" Thus, the prize seeks to emphasize the interplay between mathematical economic theory and concrete applications. Each year when the Nobel prize is awarded, the Prize Committee puts up some useful background material to explain their choice: their "Popular Information" paper is here and the "Scientific Background" paper is here. I'll draw on both in what follows.

In this duality between theory and application, Lloyd Shapley is cast as the theorist. Indeed, in an interview with the AP on Monday, he said: "I consider myself a mathematician and the award is for economics. I never, never in my life took a course in economics." But of course, economists view their field as a big enough tent to include at least some mathematicians–and in particular those who study game theory, as with the 1994 Nobel prize to John Nash and others.

Shapley and co-author David Gale published a famous paper in 1962 in the American Mathematical Monthly called "College Admissions and the Stability of Marriage" (69:1, pp. 9-15). It can be read for free (with registration) at JSTOR, or it is available in various places on the web, like here.

They begin by offering college admission as an example of the kind of problem they are considering. There are a number of colleges and a number of applicants. Colleges have preferences over who they wish to admit, and applicants have preferences over where they would like to attend. How can they be matched in a way that will, in some sense we need to define, "satisfy" both sides? Before going further, notice that this setting of multiple parties on each side, with the problem of matching them so as to "satisfy" all parties, is characteristic of marriage markets–although in that case the two groups are looking for only one partner apiece–and also of employers and potential employees in the job market.

As a starting point, it is clearly impossible to "satisfy" all parties in the sense that everyone will get their first choice. The college can't assure that all its preferred applicants will want to attend; not all applicants are likely to get their first choice. Thus, Gale and Shapley focused instead on finding a solution that would be "stable," which means, in the context of college admissions, that once everyone is matched up, there is no combination of a student who would rather be at a college other than the one they are attending AND a college that would also prefer to have that student over one of the students it had already attracted. In other words, no student or college will seek to make an end-run around a stable mechanism.

Gale and Shapley proposed a "deferred acceptance" procedure to get a stable result to their matching problem. Here is how the Nobel committee describes the process:

"Agents on one side of the market, say the medical departments, make offers to agents on the other side, the medical students. Each student reviews the proposals she receives, holds on to the one she prefers (assuming it is acceptable), and rejects the rest. A crucial aspect of this algorithm is that desirable offers are not immediately accepted, but simply held on to: deferred acceptance. Any department whose offer is rejected can make a new offer to a different student. The procedure continues until no department wishes to make another offer, at which time the students finally accept the proposals they hold.

In this process, each department starts by making its first offer to its top-ranked applicant, i.e., the medical student it would most like to have as an intern. If the offer is rejected, it then makes an offer to the applicant it ranks as number two, etc. Thus, during the operation of the algorithm, the department's expectations are lowered as it makes offers to students further and further down its preference ordering. (Of course, no offers are made to unacceptable applicants.) Conversely, since students always hold on to the most desirable offer they have received, and as offers cannot be withdrawn, each student's satisfaction is monotonically increasing during the operation of the algorithm. When the departments' decreased expectations have become consistent with the students' increased aspirations, the algorithm stops."

Here's how the procedure would work in the marriage market:

"The Gale-Shapley algorithm can be set up in two alternative ways: either men propose to women, or women propose to men. In the latter case, the process begins with each woman proposing to the man she likes the best. Each man then looks at the different proposals he has received (if any), retains what he regards as the most attractive proposal (but defers from accepting it) and rejects the others. The women who were rejected in the first round then propose to their second-best choices, while the men again keep their best offer and reject the rest. This continues until no women want to make any further proposals. As each of the men then accepts the proposal he holds, the process comes to an end."

Gale and Shapley prove that this procedure leads to a "stable" outcome. Again, this doesn't mean that everyone gets their first choice! It means that when the outcome is reached, there is no combination of medical school and applicant, or of man and woman in the marriage example, who would both prefer a different match from the one with which they ended up. But Gale and Shapley went further. It turns out that there are often many stable combinations, and in comparing these stable outcomes, the question of who does the choosing matters. If women propose to men, women will view the outcome as the best of all the stable matching possibilities, while men will view it as the worst; if men propose to women, men as a group will view it as the best of all stable matching possibilities, while women will view it as the worst. As the Nobel committee writes, "stable institutions can be designed to systematically favor one side of the market."
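The deferred-acceptance procedure is compact enough to sketch in code. Below is a minimal Python illustration (not taken from Gale and Shapley's paper; the names and preference lists are invented purely for the example), run once with men proposing and once with women proposing, to show how the proposing side comes out ahead:

```python
def deferred_acceptance(proposer_prefs, receiver_prefs):
    """Gale-Shapley deferred acceptance.

    proposer_prefs / receiver_prefs: dicts mapping each agent to a list
    of the other side's agents, best first. Returns a dict mapping each
    proposer to the receiver they end up matched with.
    """
    # rank[r][p]: position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    free = list(proposer_prefs)                   # proposers not yet held
    next_choice = {p: 0 for p in proposer_prefs}  # next receiver to try
    held = {}                                     # receiver -> proposer held

    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]     # best receiver not yet tried
        next_choice[p] += 1
        if r not in held:
            held[r] = p                   # r holds the first offer received
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])          # r trades up; old proposer is freed
            held[r] = p
        else:
            free.append(p)                # r rejects p; p tries the next one
    return {p: r for r, p in held.items()}

# Invented 2x2 preference lists with two distinct stable matchings.
men = {"al": ["xena", "yara"], "bo": ["yara", "xena"]}
women = {"xena": ["bo", "al"], "yara": ["al", "bo"]}

print("men propose:  ", deferred_acceptance(men, women))   # al-xena, bo-yara
print("women propose:", deferred_acceptance(women, men))   # xena-bo, yara-al
```

Both outcomes are stable, but each man gets his first choice when men propose, and each woman gets hers when women propose, which is exactly the Nobel committee's point that a stable mechanism can systematically favor the proposing side.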

There are many other potential tweaks and twists here. What if monetary payments are part of the match? What if traders are trading indivisible objects? These sorts of issues and many others kept a generation of game theorists busy. But for present purposes, the next key insight is that although I've explained the deferred-acceptance procedure as a step-by-step process, where parties make one offer at a time, an equivalent process can be run by a clearinghouse, if the parties submit sufficient information.

And this is where Alvin Roth enters the picture, bringing in detailed practical implications for analysis. He pointed out in a 1984 paper that the National Resident Matching Program for matching residencies and medical school students was actually a close cousin to the Gale-Shapley procedure. Roth's theoretical analysis pointed out that the form of the match they were using allowed the medical schools, rather than the students, to be the "proposers," and thus created outcomes that the medical schools viewed as the best of the stable options and the students viewed as the worst of the stable options. He redesigned the "match" both to let students be the proposers, and also to address the issue that in a number of cases a married or committed couple wanted to end up at the same school or in the same geographic location.

Roth then found other applications for what he has called the "market design" approach. (It is ironic but true that the market design approach can include prices if desired, but doesn't actually need prices to function.) The key problems in these matching scenarios often involve timing. There is often pressure on various parties–whether in marriage or college admissions–to commit early, which can lead to situations where the outcome will "unravel" as people seek a way out of their too-early commitments. On the other side, if the process slogs along too late, then "congestion" can result when an offer is turned down and it becomes too late to make other offers.

Roth found applications of the Shapley matching approach in a wide variety of academic matching settings, both in the U.S. and in other countries. He also applied a similar process to students choosing between public schools in New York City and Boston. More recently, he has sought to apply these insights to the problem of matching kidney donors with those in need of a kidney transplant. (In fairness, it should be pointed out that Roth has written a number of strong theoretical papers as well, but the Nobel committee emphasized his practical concerns, and I will follow their lead here.) As one might expect, these real-world cases raise various practical problems. Are there ways of gaming the system by not listing your first choice, which you are perhaps unlikely to get anyway, and pretending great enthusiasm for your fifth choice, which you are more likely to get? Such outcomes are sometimes possible in practical settings, but it proves much harder to game these mechanisms than one might think. Usually, you're better off just giving your true preferences and seeing how the mechanism plays itself out.

The Nobel prize to Shapley and Roth is one of those prizes that I suspect I will have a hard time explaining to non-economists. The non-economists I know ask practical questions. They want to know how the work done for the prize will spur the economy, or create jobs, or reduce inequality, or help the poor, or save the government money. Somehow, better matching for medical school students won't seem, to some non-economists, like it's "big enough" to deserve a Nobel. But economics isn't all about today's public policy questions. The prize rewards thinking deeply about how a matching process works. In a world of increasingly powerful information-processing technology, where we may all find ourselves "matched" in various ways, based on questions we answer, by software we don't understand, I suspect that Alvin Roth's current applications are just the starting point for ways to apply the insights developed from Lloyd Shapley's "deferred acceptance" mechanism.

Patents Tipping Too Far: Three Examples

The basic economics of patents as taught in every intro econ class is a balancing act: On one side, patents provide an incentive for innovation, by giving innovators a temporary monopoly over the use of their invention. This temporary monopoly rewards innovation by allowing the inventor to charge higher prices, and thus the tradeoff is that consumers temporarily pay more–although consumers of course also benefit from the existence of the innovation. Like any balancing act, patents can tip too far in one direction or the other. On one side, patents can fail to provide sufficient incentive (that is, large enough profits) for inventors. But on the other side, patent protection that is too long or too rigid can lock in profits for early innovators for an extended period, both at the long-term expense of consumers and also in a way that can cut off possibilities for future innovators.

There is some concern that the balance of patent law has tipped in a way that is overly favorable to earlier innovators. Without trying to make the case in any detail, here are three straws in the wind of this argument that recently crossed my desk.

1) Has the specialized federal appeals court for patent cases run amok?  

Back in the 1970s, it seemed clear that the enforcement of patents was wildly uneven. Thus, in 1982 the United States Court of Appeals for the Federal Circuit, one step below the U.S. Supreme Court, was created to hear all appeals of patent decisions from around the country. The difficulties are described by Timothy B. Lee under the (perhaps slightly overstated) title, "How a rogue appeals court wrecked the patent system," appearing in Ars Technica.

Lee tells the stories of the wacky 1970s, when companies that got patents literally raced to file infringement suits in jurisdictions that they thought would be favorable, while competitors raced to file suits to invalidate those patents in jurisdictions of their own choosing–because whoever filed first would often determine where the cases would be consolidated, and the jurisdiction where the case was heard would largely determine the outcome. Here's what happened in the 1970s: "Every Tuesday at noon, a crowd would gather at the patent office awaiting the week's list of issued patents. As soon as a patent was issued, a representative for its owner would rush to the telephone and order a lawyer stationed in a patent-friendly jurisdiction such as Kansas City to file an infringement lawsuit against the company's competitors. Meanwhile, representatives for the competitors would rush to the telephone as well. They would call their own lawyers in patent-skeptical jurisdictions like San Francisco and urge them to file a lawsuit seeking to invalidate the patent. Time was of the essence because the two cases would eventually be consolidated, and the court that ultimately heard the case usually depended on which filing had an earlier timestamp."

But unsurprisingly to any student of political economy, the new court ended up being staffed by lawyers who believed in very strong patent enforcement. The share of patents that were found to be infringed went from 20-30% in most of the 1960s and 1970s up to more like 50-80% for most years of the 1980s. Lee tells the story in more detail, but the new court eventually allowed software to be patented, and “business methods” to be patented. “Microsoft received just five patents during the 1980s and 1,116 patents during the 1990s, for instance. Between 2000 and 2009? The company received 12,330 patents …” Since 2006, the U.S. Supreme Court has overruled at least four major decisions from this lower court.

Lee argues: “Either way, breaking the Federal Circuit’s monopoly on patent appeals may be the single most important step we can take to fix the patent system. The Federal Circuit looks likely to undermine other reforms undertaken by Congress, just as it has resisted the Supreme Court’s efforts to bring balance to patent law. Only by extending jurisdiction over patent appeals to other appeals courts that are less biased toward patent holders can Congress return common sense to our patent system.”

2) Why are patent cases being decided by the International Trade Commission?

In May 2012, the International Trade Commission affirmed an earlier decision to ban imports of a number of Motorola Android-based smartphones and other mobile devices because they infringed a Microsoft patent that involves software for scheduling meetings–software that is rarely used and by one estimate is worth 33 cents per phone. The case raises the standard but difficult issues about how innovation can flourish in an industry like high-tech electronics, where products use dozens of overlapping patents, and thus every new product is susceptible to claims that it is infringing on some patent, somewhere.

But for me, the eye-opening part of the case was the decision-maker: Why was the International Trade Commission deciding what was essentially a patent infringement case? K. William Watson tells some of this story in “Still a Protectionist Trade Remedy: The Case for Repealing Section 337,” which was published as Cato Policy Analysis #708. It turns out that under Section 337 of the Tariff Act of 1930, the ITC has power to block imports of any products that involve “unfair means of competition.” The ITC process is faster than the courts, and has often proven quite friendly to existing patent-holders. The ITC remedy of shutting off imports is very costly. Watson summarizes the argument this way:

“The current state of the global “patent wars” in the mobile device industry aptly demonstrates the risks posed by Section 337. Courts have been perfectly capable of imposing strong remedies to deal with patent infringement, which sometimes include banning a product from the market. Most of these disputes, however, are merely part of a business model where competitors must collaborate to pool together the many patented technologies that make up cutting-edge consumer products such as smartphones and tablet computers. While companies would love to have their competitors’ products forced off the shelf, the truth is that many of the disputes involve patents that are worth only a tiny fraction of the product’s total value, meaning that injunctive relief is not always appropriate. … There is only one simple and effective solution: repeal the law. The ITC has no business imitating a court of law and is not equipped to do so. The foreign origin of a product does not make it necessary to subject its producer to a separate regime that more quickly and forcefully settles intellectual property disputes. The existence of two distinct patent enforcement mechanisms disrupts the balance of U.S. patent law and, because one mechanism is only available to challenge imports, violates U.S. trade obligations.”

3) When brand-name drug companies compensate generic drug companies to delay entering the market.

Brand-name drug companies typically have a patent, and when that patent expires, it then becomes possible for manufacturers of generic drugs to enter that market. But this process isn’t necessarily smooth, as the Federal Trade Commission explains in a recent amicus brief in U.S. District Court.

When a brand-name firm is expecting the potential entry of a generic producer, a standard strategy is to start selling a generic equivalent of its own drug as soon as its patent expires. The FTC finds that this strategy can cut the profits of the new generic producer by 40-50%. For example, the first generic company to enter a market gets a 180-day period of exclusivity as an encouragement to enter. When the brand drug Paxil went off-patent, a company called Apotex was the first allowed to sell a generic equivalent, and it expected sales of about $550 million in that 180-day window. But then the maker of Paxil put out its own generic version, and Apotex generated only $150-200 million in sales during that period.

This scenario creates the possibility for an anti-competitive deal: the original drug seller agrees not to produce its own generic equivalent, and the new generic producer agrees to delay entry into the market. The result is that new competition from the generic is delayed, and as the FTC explains: “Both effects are harmful to consumers, who face higher drug prices over a longer period.” The FTC argues in its brief that brand-name pharma firms should not be allowed to compensate generic producers for delay in entering the market–whether that compensation takes the form of a direct payment or the form of an agreement not to start its own line of generics. The FTC also points out that such payments are fairly rare–and thus presumably not necessary in finalizing the switch from a patented product to one that can be sold in a generic version.

In short, even what might seem like a simple transition–the act of a patent expiring–can be fraught with practical difficulties and ways for the incumbent firm to extract just a little more in profit. The practical world of patents is vastly more complex and ambiguous than the standard textbook tradeoff.

High School Classes in Economics or Personal Finance

The Council for Economic Education focuses on K-12 economic education. I just ran across their biennial Survey of the States: Economics and Personal Finance Education in our Nation’s Schools, which was actually released last spring. I’ll offer here an overview of the requirements for such courses across states, and then a few thoughts about the useful focus of such courses. After all, for many students who won’t take an economics course in college or won’t attend college at all, a high school course in these topics may be all the background they have before they start taking out credit cards, leasing a new car, or finding a mortgage broker willing to set them up with a zero-down-payment balloon-mortgage home loan.

As the map shows, pretty much all states have economics somewhere in their state standards. However, only 40 states require that this standard actually be implemented by school districts (!); 26 states require that schools offer a course that is specifically economics; 22 of those states require that students take such a course; and 16 of those states have some required testing for economics students.

Personal finance courses are less widespread: 46 states have some personal finance in their state standards; 36 states require school districts to implement that standard (!, again); 14 states require that a high school course be offered in the subject; 13 of those states require that students take such a course; and five of those states have required testing on personal finance topics.

My eldest son is in high school, and my daughter will soon be in high school, so I’m highly sensitive to the fact that a high school day has only so many class hours. It’s not a useful suggestion to tell schools that they need to keep adding additional classes. In particular, my guess is that if students are to take separate courses in both economics and personal finance, then there will be real trade-offs elsewhere in the curriculum.

But when I think about high school students from, say, the 60th percentile and down–many of whom will either not attend college or, if they do, not complete a four-year degree–it seems to me that it should be possible for a single course to provide a solid background in topics like household budgeting, credit cards, insurance, personal saving, personal taxes, car loans, home loans, and student loans. In addition, it seems to me that, with some thought, such material could be intertwined with a very basic course in economic principles. For example, discussions of household budgeting could lead to an explanation of demand curves, and how quantity demanded adjusts to changes in price. Discussions of personal taxes could be a basis for talking about one way in which the federal government raises funds. Discussions of credit cards can open up topics like when consumers are protected by the presence of competition, and when there is an argument for regulation. Discussions of car loans and home loans can be used as a basis for talking about what the Federal Reserve is seeking to do when it raises or lowers interest rates.

But my vision of this hybrid economics/personal finance course faces a number of obstacles. Some high school economics courses are prepping for the AP exam; in my (limited) experience, too many of the others are a stripped-down and abbreviated version of what at many colleges and universities is a full-year sequence in microeconomics and macroeconomics. In my vision, the economics portion of the course would need to be stripped down much more, to make room for the personal finance component. This kind of course would also require a certain amount of teacher training, mainly so that social studies teachers who are often primarily focused on teaching history and government can teach such courses effectively.

Finally, my sense is that many high school curriculum standards face a serious problem of mission creep: they start off thinking about what the median student should be required to learn–which is already too high a standard, because 50% of all students are inevitably below the median. Then those writing the standards start adding bits and pieces, all of which seem like good ideas, but the result is a set of “requirements” aimed at the top slice of the high school class. Then high school teachers in the trenches need to figure out how to cover these expanded and inflated requirements in a classroom where the students cover the whole vast range of abilities and backgrounds.

Of course, there are other models, like working some personal finance topics into the math classes that students take, or into the home economics course. My children seem to go through the same set of warnings about alcohol, smoking, and drug use every year. Before they take up smoking and drinking out of sheer boredom, I wouldn’t mind seeing some of that time devoted to risky financial behavior.

But one way or another, financial literacy matters a lot. Many young people are starting off their adult lives by maxing out their credit cards, taking out car loans and student loans that they can’t afford, and then facing years of dealing with ill-informed choices made when they were in their late teens and early 20s. In a short essay accompanying the CEE report, Annamaria Lusardi points out that there is a developing sense in education systems across the world that financial literacy is an important part of a high school education. Lusardi writes: “Given the importance of financial literacy, it is perhaps not surprising that, in 2012, the OECD Program for International Student Assessment (PISA) will dedicate an entire module to financial literacy, in addition to the topics they normally cover.” It’s time to find a niche for both financial and economic literacy–maybe in the same course?–in the high school curriculum.

When Tradeable Pollution Permits Fall Short

Like a lot of economists, I occasionally break into a semi-spontaneous song-and-dance about how tradeable pollution permits have all sorts of advantages. In an old-fashioned command-and-control system, every firm needs to reduce pollution emissions to a given standard, even though for some firms meeting that standard will be cheap and easy and for other firms it will be costly and difficult. In a system of tradeable pollution permits, firms that can reduce pollution less expensively can do so, and sell their extra pollution permits to other firms. As a result, the goal of limiting emissions can be reached more cheaply. Even better, firms can make money by seeking out innovative ways to cut emissions, because when pollution permits can be sold or need to be bought, there is a clear financial incentive to do so.
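The cost advantage can be sketched with a toy calculation (all numbers hypothetical, chosen only for illustration): suppose two firms must together cut 100 tons of emissions, and their abatement costs differ sharply.

```python
# Toy illustration of why permit trading lowers the cost of a given
# emissions cut. All numbers are hypothetical.
COST_A = 10   # Firm A's abatement cost, dollars per ton
COST_B = 50   # Firm B's abatement cost, dollars per ton

# Command-and-control: each firm must cut 50 tons itself.
command_cost = 50 * COST_A + 50 * COST_B   # $3,000

# Tradeable permits: the low-cost firm does all 100 tons of abatement
# and sells its unused permits to the high-cost firm. The total
# emissions cut is identical, but the total cost is lower.
trading_cost = 100 * COST_A                # $1,000

assert trading_cost < command_cost
print(f"Same 100-ton cut: command-and-control ${command_cost}, trading ${trading_cost}")
```

With constant per-ton costs, the cheap firm ends up doing all the abatement; with rising marginal costs, trading instead equalizes marginal abatement costs across firms, but the total-cost saving survives. The permit price would settle somewhere between $10 and $50 per ton, leaving both firms better off than under the uniform standard.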

Dallas Burtraw has preached this gospel of tradeable pollution permits many times himself, which is part of why I was intrigued by his recent paper “The Institutional Blind Spot in Environmental Economics” (Resources for the Future Discussion Paper 12-41, August 2012). He argues that systems of tradeable pollution permits have one severe flaw: when they set the level of pollution that will be allowed in future years, they cannot foresee future developments that may cause that level to be implausibly high.

For example, the 1990 Clean Air Act set up a framework for using tradable pollution permits that sought to reduce emissions of sulfur dioxide by half from 1980 levels. The program was a success, reducing emissions at costs that were probably 40% or so below the costs of command-and-control pollution rules. My own Journal of Economic Perspectives ran a couple of articles on the program (here and here) back in the Summer 1998 issue.

Burtraw offers an updated view. As the costs of reducing SO2 emissions and the benefits of doing so became better understood, it seemed clear that SO2 emissions should be reduced much farther and faster. Congress proved unable to make such a change legislatively, but regulators pushed forward in various ways. The result is that the program for trading SO2 pollution permits worked for a time in the 1990s but has since become irrelevant, with additional declines in SO2 emissions being driven by old-fashioned regulatory actions. Here’s Burtraw (footnotes omitted):

“The trading program was statutorily created in the Clean Air Act Amendments of 1990 and led to cost reductions of roughly 40 percent compared to traditional approaches under the Clean Air Act. However, the program had what literally became a fatal flaw: namely, an inability to adjust to new scientific or economic information. Though information current in 1990 suggested that benefits of the program would be nearly equal to costs, by 1995 there was strong evidence that benefits were an order of magnitude greater than costs. Today the Environmental Protection Agency would argue that benefits are more than thirty times the costs. Unfortunately, to change the stringency of the program requires an act of Congress, at least according to the D.C. Circuit Court. The Act locked in the emissions cap, and despite several legislative initiatives to change the stringency of the trading program, none have been successful. …

“If the nation’s fate with respect to sulfur dioxide emissions were left to Congress, tens of billions of dollars in additional environmental and public health costs would have been incurred in the last few years and into the future. Fortunately, the inability of Congress to act was backstopped by the regulatory ratchet of the Clean Air Act that triggers a procession of regulatory initiatives based on scientific findings that have been effective in shaping investment and environmental behavior in the electricity sector.”

“The sulfur dioxide cap-and-trade program was intended to reduce sulfur dioxide emissions from power plants from anticipated levels of 16 million tons per year to 8.95 million tons per year by 2010. However, evidence based on integrated assessment suggests an efficient level would be just over 1 million tons per year. In the absence of legislative action, regulatory initiatives have taken effect and driven emissions from power plants to 5.157 million tons, as measured in 2010. By 2015, the Clean Air Interstate Rule and the Mercury and Air Toxics Standard will further reduce emissions to 2.3 million tons per year. In doing so, the emissions constraint under the 1990 Clean Air Act amendment has become irrelevant, and the price of those tradable emissions allowances has fallen from several hundred dollars a ton to near zero.”

“The sulfur dioxide cap-and-trade program is the flagship example of the use of economic instruments in environmental policy. However, since its adoption in 1990, although the sulfur dioxide trading program gets most of the credit in textbooks, more than half of the emissions reductions that have and will occur are due to regulation.”

In short, a scheme for trading pollution permits works only because it sets a firm limit on how much pollution can be emitted–often a limit that is declining over time–and then allows trading of permits within that limit. But if faster reductions in the level of pollution come to seem useful or cost-effective, perhaps because of changes in the market or in scientific information, the pollution quota often can’t be changed. Moreover, the possibility that the pollution limit might be adjusted up or down in the future would make it hard for a market in pollution permits to operate.

Burtraw points out that a similar dynamic applies to reducing carbon emissions. Back in 2009, the Waxman-Markey bill that would have set up a cap-and-trade system for carbon emissions passed the House of Representatives but failed in the Senate. At the time, supporters of the bill made some dire predictions about how carbon emissions would increase as a result. But here’s the unexpected aftermath. Back at the time of Waxman-Markey, the goal was to reduce carbon emissions by about 10% by 2020, relative to 2005 levels.

However, when Waxman-Markey failed, other events happened. California has imposed rules to limit carbon emissions, along with some other states. The Environmental Protection Agency has looked for ways under existing rules to reduce carbon emissions. The new rules requiring higher fuel economy will reduce carbon emissions. And the advent of more plentiful and cheaper natural gas will reduce carbon emissions. Thus, Burtraw writes: “Total reductions [in carbon dioxide emissions] by 2020—accounting for changes due to subnational policy, regulatory actions under the Clean Air Act, and advantageous secular trends—are on track to yield emissions reductions of 16.7 percent relative to 2005 levels. The anticipated emissions reductions under the Clean Air Act regime exceed those reductions within the United States that would have occurred under cap and trade.”

Again, if cap-and-trade legislation had passed back in 2009, it would have set an overall limit on carbon emissions. With that limit in place, any other changes–like cheaper natural gas or higher fuel efficiency standards for cars–would have just made it easier to meet the pollution limit in the cap-and-trade standard. Those unexpected gains in reducing carbon emissions would probably just have led to less need to reduce carbon emissions in any other way.
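A minimal sketch of this crowding-out effect (hypothetical numbers of my own, not Burtraw's): once a cap binds, a side policy that lowers one sector's desired emissions simply frees permits that other emitters buy and use.

```python
# Toy illustration: under a binding cap, total emissions equal the cap
# regardless of side policies. All numbers are hypothetical.
CAP = 100   # total permits issued, tons

def total_emissions(unconstrained):
    # Emitters use permits up to the cap; permits freed by one sector
    # are bought and used by another, so the cap binds either way.
    return min(sum(unconstrained.values()), CAP)

before = {"power": 70, "transport": 50}   # firms want to emit 120 tons
after  = {"power": 70, "transport": 40}   # fuel-economy rules cut 10 tons

# With the cap in place, the side policy changes nothing in total...
assert total_emissions(before) == total_emissions(after) == CAP
# ...whereas without the cap, the same policy really would cut emissions.
assert sum(after.values()) < sum(before.values())
```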

None of this means that tradeable pollution permits are a bad idea. After all, they were effective in reducing SO2 emissions in a cost-effective manner in the 1990s, as well as reducing lead emissions in the 1980s. But when there are ongoing changes in the economic factors affecting the magnitude of emissions, the technology for reducing emissions, and the scientific evidence about the cost of emissions, setting a specific limit on the quantity of pollution allowed may be even harder than it looks.

Why GDP Growth is Good

Most teachers of economics at some point have to address the existential question from students: Is more output always good? Nicholas Oulton has a nice, punchy essay called “Hooray for GDP!”, written as an “Occasional Paper” for the Centre for Economic Performance at the London School of Economics and Political Science. Oulton summarizes the main arguments against focusing on GDP in this way:

1. GDP is hopelessly flawed as a measure of welfare. It ignores leisure and women’s work in the home. It takes no account of pollution and carbon emissions.
2. GDP ignores distribution. In the richest country in the world, the United States, the typical person or family has seen little or no benefit from economic growth since the 1970s. But over the same period inequality has risen sharply.
3. Happiness should be the grand aim of policy. But the evidence is that, above a certain level, a higher material standard of living does not make people any happier. …
4. Even if higher GDP were a good idea on other grounds, it’s not feasible because the environmental damage would be too great.

Oulton then addresses each argument, not by attempting any kind of exhaustive review, but by providing a selective sampling of the arguments and evidence. Here are some of his answers, mixed with my own.

1) GDP is flawed as a measure of welfare.

Yes, GDP leaves out a lot that matters, and a lot that should matter. There’s no surprise in this: every intro econ textbook for decades has taught this point. My favorite quotation on this point is from a 1968 speech by Robert Kennedy.

Oulton makes the useful distinction that GDP is a measure of output that is not, and was never intended to be, a measure of welfare, but that per capita GDP is clearly a component of welfare–that is, when one makes a list of all the factors that benefit people, a higher level of consumption of a wide range of goods and services is an item on that list. In addition, per capita GDP is a broader indicator of welfare because, looking around the world, GDP is clearly broadly correlated with health, education, democracy, and the rule of law.

For thinking about social welfare, it is often useful to look at statistics other than GDP. For example, here’s one of my earlier posts about economists attempting to estimate “Household Production: Levels and Trends.”

My own favorite comment on this point is from a 1986 essay by Robert Solow (“James Meade at Eighty,” Economic Journal, December 1986, 986-988), where he wrote: “If you have to be obsessed by something, maximizing real National Income is not a bad choice.” At least to me, the clear implication is that it’s perhaps better not to be obsessed by one number, and instead to cultivate a broader and multidimensional perspective. But yes, if you need to pick one number, real per capita GDP isn’t a bad choice. To put it another way, a high or rising GDP certainly doesn’t assure a high level of social welfare, but it makes it easier to accomplish those goals than a low and falling GDP.

2) GDP ignores distribution. 

Yes, it does. Again, GDP is a measure of output, not of everything that can and should matter in thinking about society. I’ve often noted on this website that inequality of wages and household incomes has been rising in recent decades, and that I believe this trend is a genuine problem.

But even though high and rising inequality is (I believe) a problem, that doesn’t mean that high or rising GDP is the cause of the problem. It’s not at all clear that being in an economy with a higher level of GDP leads to more inequality. From a global perspective, many of the economies with the greatest inequality are in Latin America or Africa. Many high-income countries in western Europe have much greater equality of incomes than the U.S. economy. Periods of rapid economic growth in the U.S. economy–say, back in much of the 1950s and the 1960s–were not associated with rising inequality.

Oulton writes: “Inequality concerns are real but there is still a case in my view for separating questions of growth from questions of distribution.” In my own mind, this analytical distinction started in earnest (although I’m sure there were predecessors) with John Stuart Mill’s classic 1848 text, Principles of Political Economy, where the first major section of the book is about “Production” and the second major section is about “Distribution.” In Mill’s “Autobiography,” he writes that he came to appreciate this distinction, and indeed to view it as one of the central distinguishing features of his book, as a result of discussions with his wife, Harriet Taylor Mill. Mill wrote:

“The purely scientific part of the Political Economy I did not learn from her; but it was chiefly her influence that gave to the book that general tone by which it is distinguished from all previous expositions of political economy that had any pretension to being scientific…. This tone consisted chiefly in making the proper distinction between the laws of the Production of wealth—which are real laws of nature, dependent on the properties of objects—and the modes of its Distribution, which, subject to certain conditions, depend on human will.”


3) Happiness should be the grand aim of policy. 

The question here, of course, is how “happiness” is judged. It’s true that on surveys which ask people to rank how happy they are on a scale from 1-10, the happiness level of people in high-income countries isn’t much higher than a few decades ago. There is an ongoing argument over how to interpret these results. Is happiness really “positional”–that is, do I judge my happiness relative to others at the same time, so that if everyone has more consumption, happiness doesn’t rise? Are these kinds of survey results an artefact of the survey itself–that is, someone who answers that they are a “7” on the happiness scale in 2010 isn’t saying that they would also be a “7” on the happiness scale if they had a 1970 level of income? Here’s a post from last May on the connections from economic growth to survey questions about happiness, with some emphasis on how it applies in China.

My sense is that most people actually get a lot of happiness from the goods and services of a modern economy, and they would not be equally happy if those goods and services were unavailable. Oulton makes an interesting argument here that there is a battle between process innovation and product innovation.  If both process innovation and product innovation rise together, then people have higher productivity and incomes, and happily spend those incomes on the new products that are available. If process innovation rises quickly, but product innovation does not, then people would have higher productivity and incomes, but nothing extra to spend them on–and thus might opt for much more leisure. Oulton has a nice thought experiment here:

“Imagine that over the 220 or so years since the Industrial Revolution began in Britain process innovation has taken place at the historically observed rate but that there has been no product innovation in consumer goods (though I allow product innovation in capital goods). UK GDP per capita has risen by a factor of about 12 since 1800. So people today would have potentially vastly higher incomes than they did then. But they can only spend these incomes on the consumer goods and services that were available in 1800. In those days most consumer expenditure was on food (at least 60% of the typical family budget), heat (wood or coal), lighting (candles) and clothing (mostly made from wool or leather). Luxuries like horse-drawn carriages were available to the rich and would now in this imaginary world be available to everyone. But there would be no cars, refrigerators, washing machines or dishwashers, no radio, cinema, TV or Internet, no rail or air travel, and no modern health care (e.g. no antibiotics or antiseptics). How many hours a week, how many weeks a year and how many years out of the expected lifetime would the average person be willing to work? My guess is that in this imaginary world people would work a lot less and take a lot more leisure than do real people today. After all, most consumer expenditure nowadays goes on products which were not available in 1800 and a lot on products not invented even by 1950.”

Of course, over the last century or so workweeks have gotten considerably shorter, and in that sense, people have chosen to take some of the rewards of process innovation in the form of more leisure. But most people prefer to follow a path where they can earn sufficient income to enjoy the results of product innovation. As I like to point out, the modern economy offers a fair amount of freedom in terms of work choices.  Throughout their lives, people often have a choice about whether they will choose to follow a job path that is less demanding in time and energy, but also provides lower income. Some people seek out such choices, but most do not.

4) GDP and the costs of environmental damage. 

Oulton quotes from a 2012 Royal Society report that is concerned about overpopulation and a sustainable environment. He writes: “In its preferred scenario GDP per capita is equalised across the world at $20,000 in 2005 PPP terms by 2050 (Report, page 81). The UK’s GDP per capita in 2005 was $31,580 in 2005 PPPs so this would imply a 37% cut. When they think about economic growth natural scientists tend to think about biological processes, say the growth of bacteria in a Petri dish. Seed the dish with a few bacteria and what follows looks like exponential growth for a while. But eventually as the bacteria cover most of the dish growth slows down. When the dish is completely covered growth stops. End of story.”

Of course, the world economy isn’t a Petri dish, and people aren’t bacteria. Economists have been drawing up models of economic growth with fixed amounts of land or minerals, or where economic activities emit pollution, for some decades now. Oulton summarizes the basic lesson: “These models all have in common the result that perpetual exponential growth is possible provided that technical progress is sufficiently rapid.”
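A stylized simulation of my own (not any particular model from that literature) shows the mechanism: output uses a finite resource, resource use shrinks each year so the cumulative stock drawn down stays bounded, and output still grows as long as technical progress outruns the drag.

```python
# Minimal sketch: Y = A * R**alpha, where A is technology and R is
# resource use. R shrinks 3%/year (so cumulative use stays finite),
# while A grows 2%/year. Parameters are illustrative, not calibrated.
alpha = 0.3          # resource share of output
A, R = 1.0, 1.0      # initial technology level and resource use
g_A, shrink = 0.02, 0.03

output = []
for year in range(200):
    output.append(A * R**alpha)
    A *= 1 + g_A       # technical progress
    R *= 1 - shrink    # declining resource use

# Per-period growth factor is (1 + g_A) * (1 - shrink)**alpha; with
# these numbers it exceeds 1, so output keeps rising even though
# resource use dwindles toward zero.
assert output[-1] > output[0]
```

Slow the technical progress enough that (1 + g_A)·(1 − shrink)^alpha falls below 1, and the same toy model delivers the Petri-dish outcome instead.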

In other words, it’s certainly possible to draw up a disaster scenario where resource or environmental limitations lead to grief at a global level. It’s also possible that, with a combination of investments in technology and human capital, economic growth can at least for a considerable time overcome such limitations. For an example of analysis along these lines, the United Nations has put out the first of what is intended to be a series of reports on how changes in different types of capital can offset each other (or not), which I posted about in “Sustainability and the Inclusive Wealth of Nations.”

As Oulton notes, the practical question here is not whether resource and environment limits must eventually bind at some distant point in the future, “but only whether it makes sense to advocate growth over the next 5, 10, 25, 50 or 100 years.”

In the U.S. economy, 15% of the population is below what we call the “poverty line,” and their life prospects are diminished as a result. About 2.5 billion people in the world live on less than $2/day. I do not see a practical way of raising the standard of living for these people, or for their children, unless rising GDP plays a central role.

Does Big-Time Football Reduce College Grades?

The University of Oregon Ducks football team is undefeated and ranked second in the country after beating Washington last weekend by a score of 52-21. But three economists from the University of Oregon–Jason M. Lindo, Isaac D. Swensen, and Glen R. Waddell–are using data from their school to ask “Are Big-Time Sports a Threat to Student Achievement?” Their analysis appears in the American Economic Journal: Applied Economics 2012, 4(4): 254–274. The journal isn’t freely available on-line, but many in academia will have access through library subscriptions.

Here is the approach they take: "Our primary source of data is University of Oregon student transcripts, covering all undergraduate classes administered from fall quarter of 1999 through winter quarter of 2007. … We combine these data with readily available reports of the football team’s win-loss records … Over our sample period, the winning percentage is 69.7 percent, on average, and varies from 45.5 percent to 90.9 percent." Because the researchers have data on individual students, they can make a statistical comparison of how the grade point average for an individual student changes from year to year, and see if it is correlated with the winning percentage of the football team. They can also do a number of other calculations, like adjusting for a time trend so that grade inflation is taken into account, as well as looking at how responses differ by gender, by income level (measured by which students are receiving financial aid), and by test scores before entering the university.
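The comparison described above is essentially a within-student ("fixed effects") regression: each student's grades are measured as deviations from that student's own average, and regressed on the football team's winning percentage measured as deviations from its average. Here is a minimal sketch of that idea on invented data; all numbers below (sample sizes, the -0.5 "true effect," noise levels) are made up for illustration and have nothing to do with the authors' actual dataset or estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic panel: 200 students, each observed over 8 fall terms.
# Each student has a persistent GPA level (the "fixed effect"), and
# grades respond to the team's winning percentage with slope true_beta.
n_students, n_terms = 200, 8
student_level = rng.normal(3.0, 0.3, size=(n_students, 1))   # fixed effects
win_pct = rng.uniform(0.455, 0.909, size=(1, n_terms))       # team record by term
true_beta = -0.5                                             # invented effect size
gpa = student_level + true_beta * win_pct + rng.normal(0, 0.1, size=(n_students, n_terms))

# Within transformation: subtract each student's own mean GPA, and the
# winning percentage's deviation from its own mean, then regress the
# demeaned outcome on the demeaned regressor by ordinary least squares.
y = (gpa - gpa.mean(axis=1, keepdims=True)).ravel()
x = np.broadcast_to(win_pct - win_pct.mean(), (n_students, n_terms)).ravel()
beta_hat = (x @ y) / (x @ x)

print(f"estimated within-student effect: {beta_hat:.3f}")
```

The demeaning removes each student's fixed level, so the estimate picks up only how an individual's grades co-move with the team's record, which is the logic behind the authors' "preferred estimates" quoted below.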

There's one additional element of complexity here: In most college classes, grades are given according to some explicit or implicit "curve": that is, even if the academic performance of all students was worse one fall in absolute terms, if a certain percentage of students get As, Bs, Cs, and so on, then grade point average might not show the drop in absolute level of performance. This suggests two kinds of comparisons: 1) male students in their data are more likely to watch football than female students, so one can look at how the grade gap between male and female students is related to the winning percentage of the football team; and 2) one can compare fall grades when the football team is playing to winter/spring term grades. Lindo, Swensen, and Waddell summarize their results this way:

"That is, our preferred estimates are based on considering how a student’s grades deviates from his or her own average grades as the winning percentage varies from its average, and then how this response varies across gender. With our analysis we show that male grades fall significantly with the success of the football team, both in absolute terms and relative to females. There is also pronounced heterogeneity among students, suggesting that the impact is largest among students from relatively disadvantaged backgrounds and those of relatively low ability. …

"Relative to females, males report being more likely to increase alcohol consumption, decrease studying, and increase partying around the success of the football team. Yet, both male and female students report that their behavior is responsive to athletic success. This suggests that female performance is likely affected by the performance of the football team as well, but that this effect is masked by grade curving. … [A] 25 percentage point increase in the football team’s winning percentage will increase the gender gap in GPAs … by 8.5 percent."

After comparing fall and winter academic terms, "only in the quarter we associate with football—the fall quarter—is there movement in the gender gap in academic performance that varies systematically with athletic success."

I'll spare you a homily on the "true purpose" of higher education, and the extent to which big-time sports support or undermine that purpose. But for those who hold the misapprehension that college sports provide a financial subsidy to the academic programs of these institutions, Lindo, Swensen, and Waddell toss out one cold fact: "In 2010, 211 out of 218 Division I athletics departments at universities subject to open records laws received a subsidy from their student body or general fund. These subsidies are substantial and rapidly growing. From 2006 to 2010, the average subsidy increased 25 percent, to $9 million."