The Role of Safe Assets in a Financial System

Gary Gorton, Stefan Lewellen, and Andrew Metrick presented “The Safe-Asset Share,” one of those rare academic papers with a basic empirical finding that shakes up your mental landscape, at the annual meetings of the Allied Social Science Associations a couple of weeks ago in Chicago. Here is their opening (citations and footnotes omitted):

“Over the past sixty years, the total amount of assets in the United States economy has exploded, growing from approximately four times GDP in 1952 to more than ten times GDP at the end of 2010. Yet within this rapid increase in total assets lies a remarkable fact: the percentage of all assets that can be considered “safe” has remained very stable over time. Specifically, the percentage of all assets represented by the sum of U.S. government debt and by the safe component of private financial debt, which we call the “safe-asset share”, has remained close to 33 percent in every year since 1952.”

“The dynamics of the safe-asset share are important for economists, policymakers, and regulators to understand because “safe” debt plays a major role in facilitating trade. … Most financial-sector debt has the primary feature that it is information-insensitive, that is, it is immune to adverse selection in trading because agents have no desire to acquire private information about the current health of the issuer. Treasuries, Agencies, and other forms of highly-rated government debt also have this feature. To the extent that debt is information-insensitive, it can be used efficiently as collateral in financial transactions, a role in finance that is analogous to the role of money in commerce. Thus, information-insensitive or “safe” debt is socially valuable. Importantly, the stability of the safe asset share implies that the demand for information-insensitive debt has been relatively constant as a fraction of the total assets in the economy. Given the rapid amount of change within the economy over the past sixty years, the relatively constant demand for safe debt suggests an underlying transactions technology that is not well understood.”

Here’s a figure showing the safe-asset share over time:

However, the composition of these safe assets has shifted dramatically in recent decades. It used to be mainly bank deposits, but it has now become mainly private securities. They write: “The figure shows that bank deposits were near 80 percent of the total through the 1950s and 1960s, and remained as high as 70 percent as late as 1978. This percentage then began a steep 30-year decline, with the rise of money market mutual funds, broker-dealer commercial paper, securitized debt from GSEs, and other asset-backed securities. On the eve of the financial crisis, the share of bank deposits had fallen to 27 percent. At the end of 2010, it stood at a little less than 32 percent.”

Having documented the pattern, they end their paper with more questions than answers: “[W]e currently know very little about the demand for and supply of “safe” debt. While we hope that our work is a start in the right direction, our paper raises a number of important questions. Why is the safe-asset share constant? Did the demand for safe assets play a role in the rise of the shadow banking system? What is the underlying transactions technology that relates the safe asset share to the rest of the economy? We hope that these and other questions regarding safe debt will be addressed through future research.”

However, there is a bit more to say here. Gorton in particular has been thinking through the role of safe assets in an economy and a financial system for some time. For example, the subject came up in an interview he did with the Region magazine, published by the Federal Reserve Bank of Minneapolis in December 2010. Here’s Gorton from that interview, on how a “safe asset” can be conceived of as an asset that is insensitive to information, and how, when an asset thought to be safe becomes sensitive to information, a financial crisis can result:

“Global financial crises are about debt. About debt. But, obviously, we need to have a theory of debt to understand why people would use a security, bank debt, and how that could lead to a crisis. … In my work with Tri Vi Dang and Bengt Holmström, we develop this idea, that you mention, of the optimality of debt arising from its information insensitivity. Roughly speaking, the argument for the optimality of debt is simply that it’s easiest to trade if you’re sure that neither party knows anything about the payoff on the debt. …

That intuitive logic applies to repo as well. Nobody wants to be given collateral that they have to worry about. And the mechanics of how repo works is exactly consistent with this. Firms that trade repo work in the following way: The repo traders come in in the morning, they have some coffee, they go to their desks, they start making calls, and in a large firm they’ve rolled $40 to $50 billion of repo in an hour and a half. Now, you can only do that if the depositors believe that the collateral has the feature that nobody has any private information about it. We can all just believe that it’s all AAA.

This is a feature of an economy that is fundamental. It is fundamental that you have these kinds of bank-created trading securities. And the fact that it’s fundamental and that you need these is not widely understood in economics.  …

The way standard models deal with it is, I think, incorrect. A lot of macroeconomists think in terms of an amplification mechanism. So you imagine that a shock hits the economy. The question is: What magnifies that shock and makes it have a bigger effect than it would otherwise have? That way of thinking would suggest that we live in an economy where shocks hit regularly and they’re always amplified, but every once in a while, there’s a big enough shock … So, in this way of thinking, it’s the size of the shock that’s important. A “crisis” is a “big shock.”

I don’t think that’s what we observe in the world. We don’t see lots and lots of shocks being amplified. We see a few really big events in history: the recent crisis, the Great Depression, the panics of the 19th century. Those are more than a shock being amplified. There’s something else going on. I’d say it’s a regime switch—a dramatic change in the way the financial system is operating. This notion of a kind of regime switch, which happens when you go from debt that is information-insensitive to information-sensitive, is different conceptually than an amplification mechanism.”

In a post last June 17, I quoted Ricardo Caballero about “Demand for Safe Assets in a Financial Crisis.” He argues that the world economy as a whole, with the rise of economies in Asia in particular, is suffering from a shortage of safe assets, that the financial sector tried to manufacture the desired safe assets out of mortgage-backed securities, and that when these assets were clearly seen to be not safe [Gorton would say they switched from being information-insensitive to being information-sensitive], the crisis erupted.

This story of how safe assets relate to the financial system and to the possibility of crisis is still being fleshed out. But to misquote Gertrude Stein, “There is a there there.”

Consumer Price Index vs. Personal Consumption Expenditures Index

The Consumer Price Index (CPI) from the Bureau of Labor Statistics is the measure of inflation that gets the most attention, both from the media and in most intro econ classrooms. But I’m thinking that the Personal Consumption Expenditures (PCE) index measure of inflation should start to get equal or perhaps even greater attention.

For starters, I hadn’t known–although I probably should have known–that when the Federal Reserve looks at rates of inflation, it focuses more on PCE than on CPI. The announcement of this policy change was buried in a footnote in the Fed’s Monetary Policy Report to the Congress, February 17, 2000. There’s some jargon in the quotation, which I’ll unpack in a minute, but here’s the comment:

“In past Monetary Policy Reports to the Congress, the FOMC [Fed Open Market Committee] has framed its inflation forecasts in terms of the consumer price index. The chain-type price index for PCE draws extensively on data from the consumer price index but, while not entirely free of measurement problems, has several advantages relative to the CPI. The PCE chain-type index is constructed from a formula that reflects the changing composition of spending and thereby avoids some of the upward bias associated with the fixed-weight nature of the CPI. In addition, the weights are based on a more comprehensive measure of expenditures. Finally, historical data used in the PCE price index can be revised to account for newly available information and for improvements in measurement techniques, including those that affect source data from the CPI; the result is a more consistent series over time. This switch in presentation notwithstanding, the FOMC will continue to rely on a variety of aggregate price measures, as well as other information on prices and costs, in assessing the path of inflation.”

So what is all this stuff about a “PCE chain-type index” compared to the “fixed-weight nature of the CPI,” and how the PCE is “a more comprehensive measure of expenditures,” and how “historical data used in the PCE price index can be revised”? For explanations of these differences, Phil Davies of the Minneapolis Fed offers a nice discussion of the CPI and PCE, along with much description of how price indexes have evolved over time, in “Taking the Measure of Prices and Inflation,” appearing in the December 2011 issue of the Region. For a FAQ page at the Bureau of Economic Analysis website comparing the CPI and PCE, look here. For a more detailed discussion of differences, see this article in the November 2007 Survey of Current Business.

As a starting point, here’s a graph showing the difference between the CPI and the PCE, showing annual percentage changes in inflation according to each measure. Clearly, the difference between them isn’t large. But just as clearly, the rate of inflation tends to be a little lower when using the PCE (the blue line is more often below the red than not), and the rate of deflation in 2009 was a little smaller using the PCE, too.

There are four ways in which the PCE differs from the CPI. I’ll use the names the BEA gives them:

1) The scope effect. The two indexes cover similar but not identical categories of personal spending. Here’s Davies from the Minneapolis Fed: “[T]he PCE measures a broader swath of personal consumption than the CPI. For instance, the PCE captures expenditures by rural as well as urban consumers and includes spending by nonprofit institutions that serve households. And while the CPI records only out-of-pocket spending on health care by consumers, the PCE also tracks personal medical expenses paid by employers and federal programs such as Medicare. However, over 70 percent of the price data in the PCE is drawn from the CPI.”

2) The weight effect. When combining all sorts of individual prices into an overall price index, those categories where people spend more get greater weight than those categories where people spend less. But the PCE and CPI use different weights. Davies explains: “The CPI reflects reported consumption in the Consumer Expenditure Survey, conducted for the BLS by the U.S. Census Bureau. To determine its expenditure shares, the PCE relies on business surveys such as the Census Bureau’s annual and monthly retail trade surveys. Shelter accounts for the biggest difference in weighting between the two indexes; the share of personal spending devoted to housing is larger in the CPI because nonshelter expenditures in the CES are less than those estimated from business surveys.”

3) The formula effect. Here’s the simple way to think about a price index: identify a basket of goods, with a certain quantity of each good in the basket, which represents consumption for a typical household. Calculate what it would take to buy that basket of goods at one date, and then calculate what it would take to buy the same basket of goods a year later. Prices of some individual goods will rise while others will fall, but the change in the cost of the total basket of goods will be the amount of inflation. For a long time, the Consumer Price Index was calculated with this “basket of goods” approach.

But as economists have long pointed out, this “basket of goods” approach oversimplifies in a way that surely overstates the true rise in the cost of living. There are two classic difficulties. One is that the basket of goods which represents consumption at the earlier time will not include newly created goods or improvements in the quality of goods that are available to people at the later time. Second, the basket of goods at the earlier time only made sense because it reflected prices at that time. Imagine that inflation was zero overall in a given year, but prices shifted around, so that people naturally adjusted their consumption, buying less of what became relatively more expensive and more of what became relatively cheaper. Thus, the basket of goods at the earlier time won’t be a fair representation of what households would buy under the different configuration of prices at the later time.

There are at least partial solutions to these problems, but they increase the level of complexity in the calculation. For example, those who put together the “basket of goods” now rotate continuously what is included in the basket, so that as new goods are invented and old ones improved, they can enter the basket of goods being measured. In addition, they use mathematical formulas that take into account how people substitute between goods that are relatively more or less expensive, so that the pure effect of overall inflation in the price level can be separated out. While the CPI uses a mathematical formula that allows for some substitution in response to changes in relative prices (the “fixed weight” to which the Fed was referring in 2000), the PCE uses a formula that allows for a greater degree of substitution (the “chain-type index” to which the Fed was referring in 2000).
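The substitution bias behind the formula effect can be seen in a toy calculation. The sketch below is a minimal illustration with made-up numbers, not the actual BLS or BEA methodology: it compares a fixed-basket (Laspeyres) index with a Fisher “ideal” index, the kind of formula used as a building block in chain-type indexes like the PCE.

```python
# Toy two-good example: the price of apples doubles, bananas stay flat,
# and consumers substitute away from apples toward bananas.
p0 = {"apples": 1.00, "bananas": 1.00}   # period-0 prices
p1 = {"apples": 2.00, "bananas": 1.00}   # period-1 prices
q0 = {"apples": 10, "bananas": 10}       # period-0 quantities
q1 = {"apples": 5, "bananas": 15}        # period-1 quantities (after substitution)

def basket_cost(prices, quantities):
    """Cost of buying a given basket of goods at given prices."""
    return sum(prices[g] * quantities[g] for g in prices)

# Fixed-basket (Laspeyres) index: reprices the OLD basket at NEW prices.
laspeyres = basket_cost(p1, q0) / basket_cost(p0, q0)   # 30/20 = 1.50

# Paasche index: reprices the NEW basket at both periods' prices.
paasche = basket_cost(p1, q1) / basket_cost(p0, q1)     # 25/20 = 1.25

# Fisher "ideal" index: geometric mean of the two, which allows for substitution.
fisher = (laspeyres * paasche) ** 0.5                   # ≈ 1.37

print(laspeyres, paasche, round(fisher, 3))
```

The fixed basket reports 50 percent inflation, while the substitution-aware Fisher formula reports about 37 percent: ignoring substitution overstates the rise in the cost of living.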

4) Other effects.  There are some other minor differences in the CPI and PCE indexes, including how they do seasonal adjustments, how they treat airline fares, and some other issues.

Finally, one remaining difference between the CPI and the PCE is that the CPI, once published, is not revised. The CPI is used for contracts and for legislation–like adjusting Social Security benefits. The data series of the CPI over time was calculated using evolving methodologies, and when the way in which the CPI is calculated changes, no one goes back and recalculates historical CPI figures. The PCE is not tied up in such issues, and like the GDP estimates themselves, it is revised and altered over time, as adjustments are made to data and methods. When the methods for calculating PCE are revised, these methods are then applied to all of the historical data as well, so that the PCE shows you a measure of how inflation affects personal expenditures over time, using a common methodology.

It may seem highly unlikely that we would shift to teaching and using the PCE, rather than the CPI, as the standard measure of inflation. But remember that a few decades ago it seemed highly unlikely that we would move from GNP to GDP–and now GDP is used almost all of the time.

Eliminating the Statistical Abstract?

I just clicked over to look up some numbers in the U.S. Statistical Abstract, which has been one of my standard starting points for fact-finding and fact-checking since I started needing to care about actual data as part of my high school debate team back in the 1970s. As it says at the website: “The Statistical Abstract of the United States, published since 1878, is the authoritative and comprehensive summary of statistics on the social, political, and economic organization of the United States. Use the Abstract as a convenient volume for statistical reference, and as a guide to sources of more information both in print and on the Web.”

It also says that the U.S. Census Bureau has decided to save $3 million per year by eliminating the Stat Abstract.

The invaluable Robert Samuelson at the Washington Post has been on top of this story, which I had utterly missed. He lamented the likely loss of the Stat Abstract in an August 21 column, and followed up with a sad blog post on October 4. He writes: “I’ve been covering government for more than four decades, and this is one of the worst decisions I’ve seen.” For those of us who spend large chunks of our time trying to track down actual facts, the end of a long-standing reference work is a genuinely grim day.

I found myself hoping that this is just the Census Bureau’s version of the infamous “Washington Monument strategy.” Back in 1969, a director of the National Park Service named George Hartzog responded to proposed budget cuts for his agency by closing all the national parks for two days a week–including the Washington Monument. (A short biography of Hartzog is here.) Public outrage over these highly visible changes led Congress to restore the funding. Ever since then, when a government agency responds to a tightened budget by cutting its most publicly visible functions, it’s been called a “Washington Monument” strategy.

But the Washington Monument strategy only works if the public cares, and when it comes to the Statistical Abstract, even I am not delusional enough to believe that it does. 

What makes the elimination of the Stat Abstract even more annoying is that by government standards, it’s dirt-cheap. Federal spending for 2012 is projected to be about $3.7 trillion. If I haven’t slipped a decimal point in my calculations, this rate of spending works out to about $7 million per minute, 24 hours a day, 365 days per year. At $3 million, the Stat Abstract provides one-stop access to a vast range of statistics and sources, at a cost of less than a half-minute per year of federal spending. When you look at government websites, you see a vast array of cheery stories about how the government is there to help you, with photos of smiling people and shiny equipment. Government always seems to have money for public relations and self-promotion. But apparently not for offering the public a genuinely useful, if admittedly dry, collection of actual nonpartisan data in a volume with a history of more than 130 years.
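For anyone who wants to check the decimal points, the arithmetic above can be verified in a few lines:

```python
# Back-of-the-envelope check on the "half a minute of federal spending" claim.
annual_spending = 3.7e12            # projected 2012 federal spending, in dollars
minutes_per_year = 365 * 24 * 60    # 525,600 minutes in a (non-leap) year
spending_per_minute = annual_spending / minutes_per_year

stat_abstract_budget = 3e6          # annual cost of the Statistical Abstract

print(round(spending_per_minute))                              # ≈ $7.0 million/minute
print(round(stat_abstract_budget / spending_per_minute, 2))    # ≈ 0.43 minutes
```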

For those of us who each year spend some time skimming through the Analytical Perspectives volume of the Federal Budget of the United States (welcome to my life!), there is always a section on “Strengthening Federal Statistics.” Here’s the opening paragraph:

“Federal statistical programs produce key information to illuminate public and private decisions on a range of topics, including the economy, the population, agriculture, crime, education, energy, the environment, health, science, and transportation. The share of budget resources spent on supporting Federal statistics is relatively modest—about 0.04 percent of GDP in non-decennial census years and roughly double that in decennial census years—but that funding is leveraged to inform crucial decisions in a wide variety of spheres. The ability of governments, businesses, and the general public to make appropriate decisions about budgets, employment, investments, taxes, and a host of other important matters depends critically on the ready availability of relevant, accurate, and timely Federal statistics.”

Here\’s a table showing the main federal statistical agencies and their level of spending, in millions of dollars. The U.S. Census Bureau spent a lot in 2010, because of the Census, but is back to more normal levels by now.

Can’t a few Senators and Congressmen take time out from earmarking pork barrel projects for their home districts and raising campaign contributions, and resurrect the Statistical Abstract? It’s a bad civics lesson to have this volume disappear.

Certificate Programs for Labor Market Skills

President Obama and many others have called for a dramatic increase in the number of U.S. students obtaining four-year and community college degrees. It’s a popular goal, and easy to announce, but frankly, quite unlikely. Higher education as currently constituted is extremely expensive, and neither the federal government, nor state governments, nor prospective students are flush with the needed funds. In addition, many of those not currently attending higher education aren’t prepared to flourish in that setting, whether because of lack of academic preparation, lack of interest, or both.

Brian Bosworth lays out the problem and limns a possible pathway in “Expanding Certificate Programs” in the Fall 2011 issue of Issues in Science and Technology. Here are some excerpts:

The underlying problems of stagnating workforce skills and the unlikeliness of college enrollment expanding quickly enough.

“Given current trends, the nation can expect little gain in the educational attainment of the workforce by 2040, at least as a consequence of young adults moving into and through the labor force. Older workers (ages 35 to 54) are now as well educated as younger workers (ages 25 to 34), especially in the percentage with at least a high-school degree, but also in the percentage with some postsecondary attainment. Thus, there will be no automatic attainment gain over the next several decades as current workers age and older workers leave the labor force. In fact, without some big changes in educational patterns, it is probable that the newer workers entering the workforce will have lower levels of attainment than the older workers leaving. Workforce attainment levels will stagnate or decline, and future economic growth will slow as a consequence.”

“In the face of these trends, President Obama proposed to Congress in 2009 that “by 2020, America will once again have the highest proportion of college graduates in the world.” … According to evaluations led by the National Center for Higher Education Management Systems, retaking international leadership would require U.S. college attainment rates to reach 60% in the cohort of adults ages 25 to 40. But in 2008, only 37.8% of this age group had degrees at the associate’s level or higher, and at present rates of growth, this figure would increase to only 41.9% by 2020. Closing the gap will require a 4.2% increase in degree production every year between 2008 and 2020.”

Certificate programs are growing quickly.

“Certificate programs take a variety of forms nationwide. They are offered by two-year community colleges, by four-year colleges, and, increasingly, by for-profit organizations. Programs vary in duration, falling into three general categories, with some requiring less than one academic year of work, some at least one but less than two academic years, and some requiring two to four years of work. The programs collectively awarded approximately 800,000 certificates in 2009, up more than 250% from the roughly 300,000 certificates awarded in 1994. Across all programs, awards are heavily skewed toward health care, which represented 44.1% of all certificates awarded in 2009.”

A Florida study suggests that certificate programs are paying off, especially for students who don\’t traditionally attend college.

“A study of educational and employment outcomes for students in Florida also has suggested that certificate programs, in addition to leading generally to good economic outcomes for completers, may have particular advantages for students from low-income families. The study drew from a longitudinal student record system that integrates data from students’ high-school, college, and employment experience. It followed two cohorts of public-school students who entered the ninth grade in 1995 and 1996.
“The research suggested that strong earnings effects of degree attainment (associate’s, bachelor’s, and advanced) were largely confined to students who had performed well in high school. They were continuing in postsecondary study a trajectory of success apparent in high school. However, the research found that obtaining a certificate from a two-year college significantly increased the earnings of students who did not necessarily perform well in high school, relative to those who attended college but did not obtain a credential. These students were finding new success in certificate programs, changing the trajectory of their high-school years. Moreover, the study confirmed other research that found strong returns to completion of good certificate programs, even relative to associate’s degree completers.

“Across all certificate programs, the field of study is an important predictor of earnings outcomes. In some fields, individuals who complete long-term certificates make as much money, on average, as those who complete associate’s degree programs. This seems to be because certificate completers pursue and earn awards in fields with relatively high labor market returns and then take jobs where they can realize those returns. Many individuals who gain associate’s degrees do not go on to higher attainment, and a significant number of them hold majors in areas that offer limited labor market prospects for job seekers with less than a bachelor’s degree.”

The Tennessee model of certificate programs
“Tennessee provides a clear example of what is possible and of what works. Tennessee has a statewide system of 27 postsecondary institutions that offer certificate-level programs serving almost exclusively nontraditional students. The Tennessee Technology Centers began as secondary-level, multidistrict, vocational technical schools in the 1960s under the supervision of the State Board of Education and began to serve adults in the 1970s. In most states, analogous institutions were merged into community- or technical-college systems, but in Tennessee (as in a few other states) they continue to operate as discrete non–degree-granting postsecondary institutions.
The technology centers award diplomas for programs that exceed one year in length, as well as certificates for shorter programs. Diploma programs average about 1,400 hours and some extend to more than 2,000 hours. They are designed to lead immediately to employment in a specific occupation. In 2008–2009, the centers enrolled roughly 12,100 students, and they awarded 4,696 diplomas and 2,066 certificates. Collectively, the centers offer about 60 programs, some just at the shorter-term certificate level but most at the longer-term diploma level. Some of the more popular diploma programs are Practical Nursing, Business Systems Technology, Computer Operations, Electronics Technology, Automotive Service and Repair, CAD Technology, and Industrial Maintenance.

Most students in the centers are low-income, with nearly 70% coming from households with annual income of less than $24,000 and 45% from households with annual income of less than $12,000…. The average age of the students is 32 years …  According to 2007 IPEDS data, 70% of full-time, first-time students in the centers graduated within 150% of the normal time required to complete the program. Every year for the past several years, at least 80% and sometimes as many as 90% of students who completed the program found jobs within 12 months in a field related to their program. …

A growing consensus in Tennessee holds that the key explanation for the centers’ high completion rates can be found in the program structure. The centers operate on a fixed schedule (usually from 8:00 a.m. to 2:30 p.m., Monday through Friday) that is consistent from term to term, and there is a clearly defined time to degree based on hours of instruction. The full set of competencies for each program is prescribed up front; students enroll in a single block-scheduled program, not individual courses. The programs are advertised, priced, and delivered to students as integral programs of instruction, not as separate courses. Progression though the program is based not on seat time, but on the self-paced mastery of specific occupational competencies. … The centers also build necessary remedial education into the programs, enabling students to start right away in the occupational program they came to college to pursue, building their basic math and language skills as they go, and using the program itself as a context for basic skill improvement.”

The U.S. economy needs to build bridges from those who perform near the median and lower in high school to at least somewhat skilled jobs in the workforce. I’m sure there are other promising ideas besides certificate programs. For example, I posted last October 18 on “Apprenticeships for the U.S. Economy,” and last November 3 on “Recognizing Non-formal and Informal Learning.” But trying to push most or many of these median-and-below high school students through a conventional higher education degree is not likely to work well, and would be extremely expensive. Time to start experimenting with policies that could offer a better ratio of benefits to costs.

New Fed Nominee Jeremy Stein, Rethinking Monetary Policy

Jeremy Stein is always worth reading, but when President Obama nominated him in late December for a seat on the Federal Reserve Board of Governors, he became must reading. In the latest issue of the American Economic Journal: Macroeconomics (2012, 4(1), pp. 266–282), Stein and co-author Anil Kashyap discuss “The Optimal Conduct of Monetary Policy with Interest on Reserves.” In particular, they are thinking about how it might be possible to use monetary policy for two purposes: both in its traditional role of keeping inflation low and stimulating the economy in recessions, and also in an untraditional role of reducing the chance of future financial crises. The article isn’t freely available on-line, although many students and faculty will have access to it through library subscriptions or membership in the American Economic Association.

When I taught the basics of monetary policy a few years ago, the emphasis was on how the Fed used “open market operations”–that is, buying and selling government bonds with banks–to make interest rates rise or fall. When the Fed bought bonds from banks, the banks had more cash to lend out, and interest rates would fall. When the Fed sold bonds to banks, and received cash from the banks, the banks had less cash to lend out, and interest rates would rise. Through these open market operations, the Fed reacted to risks of higher inflation or economic slowdown, adjusting the lendable funds available to banks to achieve its desired level of interest rates.

This basic exposition pointed out that in theory the Federal Reserve had other policy tools, like adjusting the level of reserves that banks were required to hold with the Fed, but because those tools received little use in recent decades, little attention was paid to them.

But when the financial crisis hit in fall 2008, the Fed got a new policy tool: it can pay interest on the reserves that banks are required to hold at the Fed. Kashyap and Stein explain: “In October of 2008, the US Federal Reserve announced that it would begin to pay interest on depository institutions’ required and excess reserve balances, having just been authorized by Congress to do so. The Fed thereby joined a large number of other central banks that were already making use of interest on reserves (IOR) prior to the onset of the global financial crisis. Given the Fed’s current policy of keeping the federal funds rate near zero, IOR has not been a quantitatively important tool thus far. As of this writing, the rate being paid is only 25 basis points. However, IOR may turn out to be extremely useful going forward …”

Thus, in the future, when the time comes for the Fed to raise interest rates, how should it do so? As Kashyap and Stein write: “When the Fed seeks to tighten monetary policy, should it raise the rate paid on reserves, contract the quantity of reserves, or some combination of the two?”

I had not known before reading this article that many central banks around the world have both tools available to them. Of course, because it’s a research journal of economics, Kashyap and Stein feel compelled to use algebraic labels rather than words: in particular, they discuss the level of interest on reserves, which they label rIOR, and the “scarcity value of reserves,” which they label ySVR and which can be thought of as the interest rate that would result from the level of reserves created by the traditional system of open market operations. They write:
“This question can be further motivated by observing the diversity of central bank practices before the financial crisis. At one extreme of the spectrum was the Federal Reserve, which set rIOR to zero, so that any variation in the funds rate had to come from quantity-mediated changes in ySVR. At the other extreme was the Reserve Bank of New Zealand, which in July 2006 adopted a “floor system” in which reserves were made sufficiently plentiful as to drive ySVR to zero, meaning that the policy rate was equal to rIOR. And in between were a number of central banks (e.g., the ECB and the central banks of England, Canada, and Australia) which used variants of a “corridor” or “symmetric channel” system. One approach to operating such a system is for the quantity of reserves to be adjusted so as to keep ySVR at a constant positive level (100 basis points being a common value), with rIOR then being used to make up the rest of the policy rate.”

“Note that these corridor systems share a key feature with the floor system used by New Zealand. In either case, all marginal variation in the policy rate comes from variation in rIOR, with no need for changes in quantity of reserves. In this sense, the pre-crisis US approach was fundamentally different from that in many other advanced economies.”
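
All of the regimes described above hit the same policy rate; they differ only in how they split it between its two components. A minimal sketch of that decomposition (the function name and the basis-point numbers are my own illustrations, not from the paper):

```python
# The policy rate decomposes as: policy_rate = r_IOR + y_SVR,
# where r_IOR is the interest paid on reserves and y_SVR is the
# "scarcity value" of reserves, set by the quantity of reserves supplied.

def policy_rate(r_ior: float, y_svr: float) -> float:
    """Policy rate in basis points, as the sum of its two components."""
    return r_ior + y_svr

# Three stylized regimes from the passage (numbers are illustrative):
# - pre-crisis US: r_IOR fixed at zero, all variation comes through y_SVR
# - New Zealand "floor" system: reserves plentiful enough that y_SVR = 0
# - "corridor" system: y_SVR held at ~100 bp, r_IOR makes up the rest
target = 300  # a hypothetical 3 percent policy rate, in basis points

pre_crisis_us = policy_rate(r_ior=0, y_svr=target)
nz_floor      = policy_rate(r_ior=target, y_svr=0)
corridor      = policy_rate(r_ior=target - 100, y_svr=100)

assert pre_crisis_us == nz_floor == corridor == target
```

The point of the sketch is the degree of freedom: once both tools exist, any target rate can be achieved with many different mixes of rIOR and ySVR, which is what frees the second tool for a financial-stability purpose.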

Kashyap and Stein point out that with these two separate policy tools--that is, the new tool of interest paid on reserves and the old tool of managing the level of bank reserves--it becomes possible for a central bank to tackle two goals. “We argue that, in general, it will be optimal for the central bank to take advantage of both tools at its disposal by varying both rIOR [the level of interest paid by the Fed to banks on their reserves] and ySVR [the “scarcity value” of reserves], with the mix depending on conditions in the real economy and in financial markets. The two-tools argument begins with the premise that monetary policy may have an important financial stability role in addition to its familiar role in managing the inflation versus output tradeoff.”

During the financial crisis, banks and other financial institutions found themselves in trouble because they had all ramped up their level of short-term debt--that is, debt which came due quite soon, on a daily or monthly basis, and was commonly being rolled over (and over and over) each time it came due. When the financial crisis hit, it became impossible to roll over all this short-term debt, and so many financial institutions suddenly found themselves without funding. Kashyap and Stein argue that financial institutions will often have a tendency to take on too much short-term debt from society’s point of view, because individual financial institutions are looking only at their own finances and not taking into account the risk that if they all take on too much short-term debt, the risk of a system-wide financial crisis goes up. Thus, a way to reduce the risk of financial crisis is to put limits on bank holdings of such short-term debt.

One way to do this is to use a broad notion of “reserve requirements.” In theory, banks wouldn’t just hold reserves based on the deposits from customers, but on any debt that they are depending on renewing in the short run. Kashyap and Stein explain: “First, within the traditional banking sector, reserve requirements should in principle apply to any form of short-term debt that is capable of creating run-like dynamics, and hence systemic fragility. This would include commercial paper, repo finance, brokered certificates of deposit, and so forth. … Going further, given that essentially the same maturity-transformation activities take place in the shadow banking sector, it would also be desirable to regulate the shadow-banking sector in a symmetric fashion. This suggests imposing reserve requirements on the short-term debt issued by nonbank broker-dealer firms, as well as on other entities (special investment vehicles, conduits, and the like) that hold credit assets financed with short-term instruments, such as asset-backed commercial paper and repo. Alternatively, to the extent that many of these short-term claims are ultimately held by stable value money market funds that effectively take checkable deposits, a reserve requirement could be applied to these funds.”

Kashyap and Stein explain that central banks around the world have been using changes in the reserve requirement as a policy tool to limit bank holdings of short-term debt, and to assure that banks have a sufficient liquidity cushion. “[A] number of central banks around the world use changes in reserve requirements as a key policy tool. For example, the Chinese central bank changed the level of reserve requirements six times in 2010, while moving their policy interest rate just once. … India offers another intriguing case study. Since November 2004, the Reserve Bank of India has operated a corridor system of monetary policy. In the aftermath of Lehman Brothers’ bankruptcy filing, the Reserve Bank cut reserve requirements from 9.0 percent to 5.0 percent in a series of four steps between October 2008 and January 2009…. Finally, Montoro and Moreno (2011) study the use of reserve requirements in three Latin American countries: Brazil, Colombia, and Peru. They note that central banks in these countries raised reserve requirements in the expansion phase of the most recent credit cycle, and then, like the Reserve Bank of India, cut them sharply after the bankruptcy of Lehman Brothers. They also argue that the motivation for this approach was explicitly rooted in a financial stability objective …”

However, Kashyap and Stein suggest that rather than varying reserve requirements to limit the short-term debt of financial institutions, the same goal can be accomplished by managing the quantity of reserves that financial institutions hold. They conclude: “The introduction of interest on reserves gives the Federal Reserve a second monetary policy tool that, used properly, may prove helpful for financial stability purposes. By adjusting both IOR [interest paid to banks on their reserves] and the quantity of reserves in the system, the Fed can simultaneously pursue price stability, as well as an optimal regime of regulating the externalities created by short-term bank debt. Though to be clear, the latter would also require, in addition to the use of IOR, a significant expansion in the coverage of reserve requirements, as well as possibly an adjustment to their level.”

Assuming that Stein makes it through the confirmation process and ends up on the Fed Board of Governors--which in a just world would happen more-or-less instantaneously--it will be interesting to see how these issues emerge. Will the Fed begin to focus on how to reduce the systemic risk of another financial crisis? Will it revive changes in reserve requirements as an active tool of monetary policy? Will the Fed’s reserve requirement be expanded so that it is based not just on deposits but on all forms of short-term borrowing, and applied to all financial institutions (not just banks)? Will it start to use interest paid on bank reserves as a way of moving interest rates, while controlling the quantity of bank reserves as a way of holding down the amount of short-term debt in the financial sector? Will the central bank perhaps decide to pay different rates of interest on the bank reserves it requires than on those held in excess of the requirement?

Leave aside all the potential complexities here, many of which are discussed in the article, and which are certainly real enough. The most basic ways that we have thought about and taught monetary policy in the last few decades may be on the verge of change.

(Full disclosure: Jeremy Stein was a co-editor of my own Journal of Economic Perspectives from 2007-2009.)


Robert Frank on Context Externalities and Positional Goods

Romesh Vaitilingam has an informative interview with Robert Frank at the VoxEU website, recorded in November 2011 and posted December 23, 2011. It is in podcast form, or if you prefer, there’s a transcript. I’ll first offer some excerpts in which Frank explains his position, and then offer some observations and criticism. Here’s Frank:

On Darwin\’s exposition of why competition doesn\’t always lead to socially desirable outcomes

“I’ve made a fearless prediction that in 100 years time, people like you and me will check Darwin’s name when they’re asked to fill out a survey identifying the father of modern economics. … It will eventually be seen as an encompassing vision that includes Adam Smith’s invisible hand theory as an interesting special case. … So the antlers of the bull elk, for example. They are primarily to help males battle successfully against other bulls for access to mates. Darwin saw that males in most vertebrate species took more than one mate if they could. The qualifier obviously is the important step because if some succeeded, that means others don’t take any mates at all, which is the real loser side in the Darwinian scheme of things. So of course males fight bitterly for access to females. Antlers are the weapons for that particular species. And some mutations that coded for bigger ones were strongly favored in each case. They spread quickly, the mutations accreted. Now we get animals with antlers four feet across, weighing 40 pounds. That’s too big for bulls as a group. Antlers don’t grow forever, that’s true. Natural selection puts a stop to the growth. There’s an equilibrium, but it’s not an optimum size when viewed from the perspective of bulls as a group. They’d much rather be half as big, because they’re such an encumbrance when they’re chased into a wooded area by wolves. They’re easily surrounded and killed. If they could take a vote or put their hoof on a red button at the count of three, “all antlers shrink by half”, they’d have compelling reasons to do that. It’s relative antler size that matters in battle, so it wouldn’t affect the outcome of any fight. But they’d all be more mobile. They’d all be better able to escape from predation by wolves. From the perspective of the bulls themselves, that would be a good thing.”

On \”context externalities\”

“I think more in terms of “context externalities”, I would call them. Is my car OK? Is my house OK? … They’re not socially scarce, but people’s evaluations of them are very heavily context dependent. Is my house OK? I lived in a two room house in Nepal when I was a Peace Corps volunteer. It didn’t have any plumbing. It didn’t have any electricity. The roof leaked when it rained hard. It was nonetheless a perfectly OK house in that context. If you lived in that house here in the UK, you’d be ashamed for your friends to know where you lived. Your kids wouldn’t want their friends to know where they lived. It would be a house that was by no stretch of imagination conceivably evaluated as being OK. It’s just an inadequate house by the current standards. If you look at context externalities, they’re not here or there, they’re everywhere. They’re more intense in some domains than others. … [O]ne of the main results of looking at the world this way is you get arms races always that focus on the categories where context matters more. And they suck resources out of the categories where context matters less. In the house and leisure example, people would work longer hours thinking they’re going to get ahead by being able to buy a bigger house. … It’s that kind of arms race that leads to the misallocation. That’s why the invisible hand doesn’t steer things to the best uses. … Now the US family, on average, spends $28,000 on a wedding. That’s in 2009, the most recent figure I could find. In 1980 the inflation adjusted figure was $11,000. Nobody could pretend that the people getting married in 2009 were happier because of that extra spending. It was just that the people at the top spent more. That led the people just below them to spend more. There was a cascade.”

Policy implications? 

“I focus almost exclusively on remedies of the sort that try to make behaviors that cause harm to others less attractive to individuals by making them more expensive, usually by taxing them. That doesn’t prohibit somebody from doing anything, so if somebody has got a really important stake in continuing to do what he’s doing, he can, but he pays the fee. … Tax harmful behavior is the mantra that I repeat again and again …. Mainly the biggest remedy is to tax consumption at a steeply progressive rate.”

Some reactions

As Frank says, he sees what he calls “context externalities” everywhere. I’m queasy about thinking of these social pressures as “externalities,” that is, as market failures in which a tax reflecting the social costs of the externality would improve social welfare. Claims that social pressures are making most of us consume items or do things we otherwise would not do are of course true, as they have been for every society since the dawn of time. But implicitly claiming that people’s “optimal” decisions are the ones they would make if they had no social pressure at all, and that social pressures must therefore drive them away from what would have been their optimal choice, seems like an offbeat claim for an avowedly “social science” like economics.

After all, the range of behaviors influenced by social pressure is very large: not just conspicuous consumption, but also many other decisions in various social groups: about working, or not; focusing on finishing certain levels of education, or not; taking certain drugs, or not; worshiping in a certain church, or not; becoming a parent at a young age, or not; and many others. In all of these cases, local and social pressures probably cause people to alter their behavior from what they would otherwise have done. But it would seem overly broad to call these all “context externalities” that are potentially ripe for policy intervention.

The alternative position usually taken by economists is that people are treated as having the autonomy and individuality to form their own preferences in a context that is mysterious and largely unexamined (by economists)--but a context that includes social pressures--and then economists study how demand based on these tastes and preferences interacts with supply based on technology and production in the market. If some people work harder because they want to outdo their neighbors, that’s not usually considered a “market failure.” If a certain social group decides to live a highly simple life where they strip their consumption down to as little as possible, that form of social pressure isn’t considered a “market failure,” either.

I do like the idea of a progressive consumption tax that Frank emphasizes, but not because of any argument about “context externalities.” The emphasis on progressivity--that is, those with high incomes paying a greater share of their income in taxes--is to pursue goals of social equity. (Indeed, it seems to me that Frank’s “context externalities” are best understood as a way of saying that the marginal utility of income for those with high income levels is low, and so higher tax rates on those with high incomes are justified.) The emphasis on consumption, rather than income, is because the U.S. economy would benefit from a higher rate of saving, and a consumption tax falls only on what is consumed, not on what is saved.
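
The mechanics behind a progressive consumption tax are simple to sketch: the tax base is income minus saving, so whatever is saved escapes tax, and marginal rates rise with the size of the base. The bracket thresholds and rates below are purely hypothetical, chosen only to show the progressivity:

```python
# Progressive consumption tax: the base is income minus saving, so
# saving is untaxed; marginal rates rise with consumption.
# Bracket thresholds and rates are hypothetical illustrations.
BRACKETS = [(0, 0.10), (50_000, 0.25), (150_000, 0.50)]  # (threshold, marginal rate)

def consumption_tax(income: float, saving: float) -> float:
    base = max(income - saving, 0.0)  # consumption is the tax base
    tax = 0.0
    thresholds = [lo for lo, _ in BRACKETS] + [float("inf")]
    for (lo, rate), hi in zip(BRACKETS, thresholds[1:]):
        if base > lo:
            # tax the slice of consumption falling inside this bracket
            tax += (min(base, hi) - lo) * rate
    return tax

# Two households with the same $200,000 income: the heavier saver has a
# smaller tax base, and the spender's last dollars are taxed at 50 percent.
big_spender = consumption_tax(income=200_000, saving=20_000)   # base 180,000
big_saver   = consumption_tax(income=200_000, saving=120_000)  # base  80,000
assert big_saver < big_spender
```

The design choice this illustrates is exactly the one in the paragraph above: progressivity comes from the rising marginal rates, and the pro-saving incentive comes from excluding saving from the base.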

For What Majors Does College Pay Off?

Anthony P. Carnevale, Ban Cheah, and Jeff Strohl of the Georgetown Center on Education and the Workforce have published “Hard Times: College Majors, Unemployment, and Earnings.”
The main focus of the report is to look at what people majored in in college, and then to compare unemployment and earnings between majors. Before showing some highlights, two warnings are appropriate.

First, the data behind these figures is from 2009 and 2010. The study compares recent college graduates who are 22-26 years of age, experienced workers who are 30-54 years of age, and graduate degree holders who are limited to 30-54 years of age--all from the 2009 and 2010 data. But how things looked in 2009 and 2010 may not be a good predictor of the future: for example, recent architecture graduates were doing poorly in the aftermath of the housing market collapse in 2009 and 2010, but that wasn’t true back in 2005.

Second, the classic problem in thinking about the benefits of education is to isolate cause and effect. The problem is that those who go on to get additional years of education may well be different in a number of ways: for example, they may have more persistence in their work habits, or they may come from a social background with higher expectations of educational achievement, or they may have higher intelligence in a way that makes it easier for them to perform well in school, or they may be better at deferred gratification, or they may have other personality types that flourish to a greater extent in an educational setting. Thus, when you see that, on average, people with a college degree have higher income than those with a high school degree, or that those who major in chemistry have higher income than those who major in English, it would be highly unwise to attribute all of the income gap just to what courses they took. If you could somehow transplant the characteristics of all the chemistry majors into English majors, and vice versa, the resulting income gaps might look quite a bit different.
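
The selection problem just described can be made concrete with a toy simulation (every number here is invented for illustration): let an unobserved "ability" raise income directly and also steer students toward one major, while the major itself carries only a modest causal premium. The raw income gap between majors then overstates that premium:

```python
# Toy illustration of selection bias (all numbers invented):
# ability raises income directly AND steers students into major A,
# so the raw income gap between majors exceeds the true major premium.
import random

random.seed(0)
TRUE_PREMIUM = 5_000  # the causal income effect of choosing major A

students = []
for _ in range(10_000):
    ability = random.random()        # unobserved, uniform on [0, 1]
    major_a = ability > 0.5          # higher-ability students pick major A
    income = 30_000 + 20_000 * ability + (TRUE_PREMIUM if major_a else 0)
    students.append((major_a, income))

mean_a = sum(inc for a, inc in students if a) / sum(1 for a, _ in students if a)
mean_b = sum(inc for a, inc in students if not a) / sum(1 for a, _ in students if not a)
naive_gap = mean_a - mean_b

# The raw gap bundles the ability difference with the causal premium,
# so it comes out roughly three times the true premium here.
assert naive_gap > TRUE_PREMIUM
```

In this construction the between-major ability difference contributes about $10,000 to the observed gap on top of the $5,000 causal premium, which is exactly why attributing the whole gap to the major would be unwise.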

So here is one figure showing unemployment rates by major, and another showing earnings by major. In each figure, there are separate marks for recent graduates, experienced graduates, and those who hold graduate degrees.

As noted earlier, one shouldn’t overinterpret these results. But when looking at unemployment rates, along with the architects, those who majored in humanities or in the arts have relatively high rates, while those who majored in health and education have relatively low unemployment rates.

When it comes to income, the highest income levels are for those who majored in engineering, computer science/mathematics, life sciences, social sciences, and business. The lowest incomes go to those majoring in arts, education, and psychology/social work.

Iceland, Ireland, and Latvia: Three Stories of Banking Crisis and Recovery

Zsolt Darvas offers a useful meditation in “A Tale of Three Countries: Recovery After Banking Crisis,” a December 2011 working paper for Bruegel, a Brussels-based think tank. In effect, Darvas offers a case study approach by picking three small economies that went through a broadly similar banking crisis, but reacted with different policy choices. He starts this way (footnotes and references to tables and figures omitted):
“Three small, open European economies—Iceland, Ireland and Latvia with populations of 0.3, 4.4 and 2.3 million respectively—got into serious trouble during the global financial crisis. Behind their problems were rapid credit growth and expansion of other banking activities in the years leading up to the crisis, largely financed by international borrowing. This led to sharp increases in gross (Iceland and Ireland) and net (Iceland and Latvia) foreign liabilities. Credit booms fuelled property-price booms and a rapid increase in the contribution of the construction sector to output – above 10 percent in all three countries. While savings-investment imbalances in the years of high growth were largely of private origin, public spending kept up with the revenue overperformance that was the consequence of buoyant economic activity. During the crisis, property prices collapsed, construction activity contracted and public revenues fell, especially those related to the previously booming sectors. All three countries had to turn to the International Monetary Fund and their European partners for help.”

Darvas points out that “the crisis hit Latvia harder than any other country, and Ireland also suffered heavily, while Iceland exited the crisis with the smallest fall in employment, despite the greatest shock to the financial system.” What policy differences across the three countries might explain this pattern?

Iceland let its currency depreciate as part of its policy response, while Ireland was locked into the euro and Latvia chose to keep its currency pegged to the euro.
“Ireland has been a member of the euro area since 1999, and therefore adjustment through the nominal exchange rate against the euro was not an option. Latvia has had a fixed exchange rate with the euro since 2004, and Latvian policymakers chose not to exercise the option to devalue. Both Ireland and Latvia decided to embark on a so called ‘internal devaluation’, ie efforts to cut wages and prices. Iceland has a floating exchange rate. When markets started to panic and withdrew external lending, given the size of the country’s obligations, there was no choice but to let the currency depreciate. The Icelandic krona depreciated by about 50 percent in nominal terms – depreciation would have been sharper without capital controls …”

In Iceland, the banks were allowed to fail. In Ireland, the government assumed the liabilities of the banks. In Latvia, most banks were foreign-owned and absorbed losses.
“In Iceland, where credit to the private sector reached 3.5 times Icelandic GDP, the combined balance sheet of banks reached an even greater number, and banks heavily borrowed from the wholesale market, the government did not have the means to save the banks. Therefore, there was no choice but to let the banks default when global money markets froze after the collapse of Lehman Brothers in September 2008….

In Ireland, the balance sheet of Irish-owned banks was 3.7 times GDP in 2007 … . The Irish government guaranteed most liabilities of Irish-owned banks. … Taxpayers’ money was used to cover bank losses above bank capital (which was wiped out) and subordinated bank bondholders (whose loss is estimated to be about 10 percent of Irish GDP in the form of retiring €25 billion subordinated debt for new debt or equity of €10 billion). …

In Latvia about two thirds of the banking system was owned by foreign banks (mostly Scandinavian banks), which assumed banking losses and supported their Latvian subsidiaries, thereby making the lender-of-last-resort role of the Latvian central bank less relevant. … According to the ECB’s data on consolidated banking statistics, the loss incurred by foreign banks was about 5.7 percent of GDP and the loss of domestic banks about 3.6 percent of GDP by 2010 – a large amount, but well below the banking sector losses in the two other case study countries. The IMF calculated that bank support boosted the public debt/GDP ratio by about 7 percentage points of GDP by 2010.”

Iceland introduced capital controls; Ireland and Latvia did not. 
“Due to fear of further capital outflows and additional depreciation of the Icelandic krona, in late 2008 strict capital controls were introduced in Iceland. This has locked in non-resident deposits and government paper holdings in Iceland and locked out Icelandic krona assets held outside the country, in addition to prohibiting transfers across the border by both residents and non-residents.”

Comparing the outcomes across these three countries, Latvia’s GDP fell 25%, with an employment drop of 17%. Ireland had a GDP decline of 10%, with a fall in employment of 13%. Iceland had the biggest shock to its financial system, but had a GDP decline of 9%, and a fall in employment of 5%.

Without overinterpreting the lessons to be learned from this three-country comparison, a few themes do emerge.

1) Latvia was fearful of allowing its currency to depreciate against the euro. But the Iceland example suggests that in a time of crisis, if you have the flexibility to let your exchange rate fall, do it. Of course, Ireland was locked into the euro without such flexibility.

2) Think twice before socializing bank losses. Iceland didn’t take this step; Ireland did. As Darvas writes: “Little is known about what would have happened to financial stability outside Ireland in the event of letting Irish banks default, but one thing is clear: other countries have benefited from the Irish socialisation of a large share of bank losses, which has significantly contributed to the explosion of Irish public debt.” The result is that while Iceland and Latvia have their public debt under control, it has risen by much more in Ireland. Darvas writes: “Before the crisis, gross government debt was below 30 percent of GDP in all three countries, but started to balloon quickly. … [B]ank support boosted Irish public debt by about 40 percent of GDP, Icelandic public debt by about 20 percent and Latvian public debt by about 7 percent. Since Iceland and Latvia gained better control over the budget deficit than Ireland – partly due to the difference in bank support – European Commission forecasts stabilisation of the debt ratio in the two countries, but in Ireland a further 20 percentage points of GDP increase is expected till 2012.”

3) In the short run, during and immediately after the crisis, imposing capital controls hasn’t seemed to hurt Iceland’s economic recovery. But it’s not clear when or how these controls will be loosened over time.

Of course, applying lessons from particular countries is always tricky. But economic recovery has started in all three of these countries. As Darvas writes: “Whether the adjustment experiences of the three countries could be a lesson for other countries, such as the Mediterranean countries of the euro area, should be the subject of a different study.”

Cancer Rates Continue Falling

My wife sometimes notes, with wry incredulity, that economists seem to feel as if they have a more-informed opinion than non-economists on every topic. Thus, I smiled when I ran across this press release last week: “American Cancer Society report finds continued progress in reducing cancer mortality.” As they report: “The American Cancer Society’s annual cancer statistics report shows that between 2004 and 2008, overall cancer incidence rates declined by 0.6% per year in men and were stable in women, while cancer death rates decreased by 1.8% per year in men and by 1.6% per year in women. The report, Cancer Statistics 2012, published online ahead of print in CA: A Cancer Journal for Clinicians, says over the past 10 years of available data (1999-2008), cancer death rates have declined in men and women of every racial/ethnic group with the exception of American Indians/Alaska Natives, among whom rates have remained stable. The reduction in overall cancer death rates since 1990 in men and 1991 in women translates to the avoidance of more than a million total deaths from cancer during that time period.”

Of course, economists also believe that we have useful contributions to make to the debate over reducing cancer. (Cue wife and children, rolling their eyes.) In the Fall 2008 issue of my own Journal of Economic Perspectives, David M. Cutler wrote: “Are We Finally Winning the War on Cancer?” The article is freely available to anyone, like all issues of the JEP from the current issue back to 1994, courtesy of the American Economic Association. Here’s the abstract:

“President Nixon declared what came to be known as the “war on cancer” in 1971 in his State of the Union address. At first the war on cancer went poorly: despite a substantial increase in resources, age-adjusted cancer mortality increased by 8 percent between 1971 and 1990, twice the increase from 1950 through 1971. However, between 1990 and 2004, age-adjusted cancer mortality fell by 13 percent. This drop translates into an increase in life expectancy at birth of half a year--roughly a quarter of the two-year increase in life expectancy over this time period and a third of the increase in life expectancy at age 45. The decline brings cancer mortality to its lowest level in 60 years. In the war on cancer, optimism has replaced pessimism. In this paper, I evaluate the reasons for the reduction in cancer mortality. I highlight three factors as leading to improved survival. Most important is cancer screening: mammography for breast cancer and colonoscopy for colorectal cancer. These technologies have had the largest impact on survival, at relatively moderate cost. Second in importance are personal behaviors, especially the reduction in smoking. Tobacco-related mortality reduction is among the major factors associated with better health, likely at a cost worth paying. Third in importance, and more controversial, are treatment changes. Improvements in surgery, radiation, and chemotherapy have contributed to improved survival for a number of cancers, but at high cost. The major challenge for cancer care in the future is likely to be the balancing act between what we are able to do and what it makes sense to pay for.”

How Republican and Democratic Professors Grade

Talia Bar and Asaf Zussman explore “Partisan Grading” in the most recent issue (vol. 4, number 1) of the American Economic Journal: Applied Economics. The article is not freely available on-line, although many readers can get on-line access if their library has a subscription to the publications of the American Economic Association. As they explain in their abstract: “We study grading outcomes associated with professors in an elite university in the United States who were identified—using voter registration records from the county where the university is located—as either Republicans or Democrats. The evidence suggests that student grades are linked to the political orientation of professors.”

Their useful starting point is that they have data on SAT scores for students--which are on average a good predictor of college grades. Measured by this standard, they show that the distribution of students headed into the classrooms of Republican and Democratic professors is essentially the same. However, the grading outcomes are not the same.

They have data from “the College of Arts and Sciences of an elite university in the United States between the spring semester of 2000 and the spring semester of 2004.” More specifically, they have grades of 17,062 students taking 3,277 undergraduate-level courses with 417 professors. They find:

“[T]he variance of grades is higher in courses taught by Republicans than in courses taught by Democrats. Moreover, in additional analysis, we find that relative to their Democratic colleagues, Republican professors tend to assign more very low and very high grades. The share of the lowest grades (F, D−, D, D+, and C−) out of the total is 6.2 percent in courses taught by Republican professors and only 4.0 percent in courses taught by Democratic professors. The share of the highest grade (A+) out of the total is 8.0 percent in courses taught by Republican professors and only 3.5 percent in courses taught by Democratic professors. Both differences are highly statistically significant.”

This general pattern holds up after adjusting for differences in grading across academic departments. Here’s a graphical illustration of the same general theme. The horizontal axis shows SAT scores of students; the vertical axis shows the average grade received by students. Notice that students with low SAT scores--say, under 1200--receive an average grade of about 2.4 in classes taught by Republican professors, but 2.9 in classes taught by Democratic professors. At the higher end of the range, students with about 1400 on their SATs up to about 1600 get roughly the same grades on average from Democratic professors, at about 3.4. However, students with SAT scores in the top 1560-1600 range on average get grades of about 3.6 from Republican professors.

The authors are at pains to emphasize that there are multiple possible interpretations of these findings.

“One interpretation is that it reflects a difference in grading practices but not in student performance. In other words, an identical distribution of student performance will translate into different distributions of grades, with Republican professors tending more than their Democratic colleagues to assign low grades to low-ability students and high grades to high-ability students.

“An alternative way to interpret the finding is that it reflects a difference in student performance but not in grading practices. The difference in student performance could be related to the amount of effort professors are willing to invest in helping students of different abilities or in the extent to which professors encourage students of different abilities. For example, it is possible that Democratic professors would devote more resources (e.g., in office hours time) to helping low-ability students, while Republican professors would devote more resources to nurturing high-ability students. It may also be the case that Republicans have different teaching or testing styles than Democrats (for example, with different needs for memorization or creativity), and that student performance varies across these heterogeneous learning environments. An additional possibility is that Democrats differentially reward something other than pre-existing talent.

“These interpretations are all consistent with our hypothesis. … The important point from our perspective is that the evidence suggests that Republican professors are associated with less egalitarian grading outcomes.”

While Bar and Zussman are professionally cautious in their interpretation, readers are of course free to offer their own interpretations. Let the parade of anecdote, speculation, innuendo and overstatement commence!