Inequalities of Crime Victimization and Criminal Justice

Many Americans worry about high incarceration rates and a police presence that can be heavy-handed or worse in some communities. Many Americans are also worried about crime. For example, here's a Gallup poll result from early March:

[Gallup chart on Americans' worry about crime and drugs]

And law-abiding people in some communities, many of them predominantly low-income and African-American, can end up facing an emotionally crucifying choice. On one side, crime rates in their community are high, which is a terrible and sometimes tragic and fatal burden on everyday life. On the other side, they are watching a large share of their community, mainly men, becoming involved with the criminal justice system through fines, probation, or incarceration. Although those who are convicted of crimes are the ones who officially bear the costs, in fact the costs when someone needs to pay fines, or can't earn much or any income, or can only be visited by making a trip to a correctional facility are also shared with families, mothers, and children. Magnus Lofstrom and Steven Raphael explore these questions of "Crime, the Criminal Justice System, and Socioeconomic Inequality" in the Spring 2016 issue of the Journal of Economic Perspectives.

(Full disclosure: I've worked as the Managing Editor of the Journal of Economic Perspectives for 30 years. All papers appearing in the journal, back to the first issue in Summer 1987, are freely available online, compliments of the American Economic Association.)

It's well-known that rates of violent and property crime have fallen substantially in the US in the last 25 years or so. What is less well-recognized is that the biggest reductions in crime have happened in the often predominantly low-income and African-American communities that were most plagued by crime. Lofstrom and Raphael look at crime rates across cities with lower and higher rates of poverty in 1990 and 2008:

"However, the inequality between cities with the highest and lower poverty rates narrows considerably over this 18-year period. Here we observe a narrowing of both the ratio of crime rates as well as the absolute difference. Expressed as a ratio, the 1990 violent crime rate among the cities in the top poverty decile was 15.8 times the rate for the cities in the lowest poverty decile. By 2008, the ratio falls to 11.9. When expressed in levels, in 1990 the violent crime rate in the cities in the upper decile for poverty rates exceeds the violent crime rate in cities in the lowest decile for poverty rates by 1,860 incidents per 100,000. By 2008, the absolute difference in violent crime rates shrinks to 941 per 100,000. We see comparable narrowing in the differences between poorer and less-poor cities in property crime rates."

As another example, Lofstrom and Raphael refer to a study which broke down crime rates in Pittsburgh across the "tracts" used in compiling the US census. As overall rates of crime fell in Pittsburgh, predominantly African-American areas saw the biggest gains:

"The decline in violent crime in the 20 percent of tracts with the highest proportion black amounts to 54 percent of the overall decline in violent crime citywide. These tracts account for 23 percent of the city’s population, have an average proportion black among tract residents of 0.78 and an average proportion poor of 0.32. Similarly, the decline in violent crime in the poorest quintile of tracts amounts to 60 percent of the citywide decline in violent crime incidents, despite these tracts being home to only 17 percent of the city’s population."

It remains true that one of the common penalties for being poor in the United States is that you are more likely to live in a neighborhood with a much higher crime rate. But as overall rates of crime have fallen, the inequality of greater vulnerability to crime has diminished.
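As a quick back-of-the-envelope check (my own arithmetic, not a calculation from the paper), the reported ratios and absolute differences are enough to back out the implied violent crime rates in the top and bottom poverty deciles:

```python
# Back out implied violent crime rates (per 100,000 residents) from the
# reported ratio (high/low) and absolute difference (high - low).
# If high = ratio * low and high - low = diff, then low = diff / (ratio - 1).

def implied_rates(ratio, diff):
    low = diff / (ratio - 1)
    return low, ratio * low

for year, ratio, diff in [(1990, 15.8, 1860), (2008, 11.9, 941)]:
    low, high = implied_rates(ratio, diff)
    print(f"{year}: ~{low:.0f} (lowest-poverty decile) vs. "
          f"~{high:.0f} (highest-poverty decile)")
```

By this rough calculation, violent crime fell in both groups of cities, with by far the larger absolute decline in the poorest cities.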

On the other side of the crime-and-punishment ledger, low-income and African-American men are more likely to end up in the criminal justice system. Lofstrom and Raphael give sources and studies for the statistics: "[N]early one-third of black males born in 2001 will serve prison time at some point in their lives. The comparable figure for Hispanic men is 17 percent … [F]or African-American men born between 1965 and 1969, 20.5 percent had been to prison by 1999. The comparable figures were 30.2 percent for black men without a college degree and approximately 59 percent for black men without a high school degree."

I'm not someone who sympathizes with or romanticizes those who commit crimes. But economics is about tradeoffs, and imposing costs on those who commit crimes has tradeoffs for the rest of society, too. For example, the cost to taxpayers is on the order of $350 billion per year, which in 2010 broke down as "$113 billion on police, $81 billion on corrections, $76 billion in expenditure by various federal agencies, and $84 billion devoted to combating drug trafficking." The question of whether those costs should be higher or lower, or reallocated between these categories, is a worthy one for economists.

But the costs explicitly imposed by the legal system are only part of the picture. For example, living in a community where it is common for you to experience or watch as people are regularly stopped and frisked is a cost, too. Lofstrom and Raphael discuss "collateral consequence" studies about how being in the criminal justice system affects employment prospects, health outcomes, and problem behaviors and depression among children of the incarcerated. In addition, many local jurisdictions have dramatically increased their use of fines in the last couple of decades, which can often end up being a high enough fraction of annual income for a low-income worker that they become nearly impossible to pay, leading to additional fines or more jail time. The US Department of Justice Civil Rights Division report following up on practices in Ferguson, Missouri, noted an "aggressive use of fines and fees imposed for minor crimes, with this revenue accounting for roughly one-fifth of the city’s general fund sources." As Lofstrom and Raphael explain:

"Money is fungible. When fines and fees are imposed as part of a criminal prosecution, at least some of the financial burden will devolve on to the household of the person involved with the criminal justice system. When someone who is involved in the criminal justice system has reduced employment prospects, some of those financial costs will again be borne by others in their household. We have said nothing about the family resources devoted to replenishing inmate commissary accounts, the devotion of household resources to prison phone calls, time devoted to visiting family members, and the other manners by which a family member’s involvement with the criminal justice system may tax a household’s resources. To our knowledge, aggregate data on such costs do not exist."

I wrote a few weeks back about the empirical evidence on "Crime and Incarceration: Correlation, Causation, and Policy" (April 29, 2016). Yes, there is a correlation: incarceration rates have risen in the US as crime has fallen. But a more careful look at the evidence strongly suggests that while the rise in incarceration rates probably did contribute to bringing down crime rates in the 1980s and into the early 1990s, the continuing rise in incarceration rates since then seems to have brought diminishing returns–and at this point, near-zero returns–in reducing crime further.

Lofstrom and Raphael conclude:

"Many of the same low-income predominantly African American communities have disproportionately experienced both the welcome reduction in inequality for crime victims and the less-welcome rise in inequality due to changes in criminal justice sanctioning. While it is tempting to consider whether these two changes in inequality can be weighed and balanced against each other, it seems to us that this temptation should be resisted on both theoretical and practical grounds. On theoretical grounds, the case for reducing inequality of any type is always rooted in claims about fairness and justice. In some situations, several different claims about inequality can be combined into a single scale—for example, when such claims can be monetized or measured in terms of income. But the inequality of the suffering of crime victims is fundamentally different from the inequality of disproportionate criminal justice sanctioning, and cannot be compared on the same scale. In practical terms, while higher rates of incarceration and other criminal justice sanctions may have had some effect in reducing crime back in the 1970s and through the 1980s, there is little evidence to believe that the higher rates have caused the reduction in crime in the last two decades. Thus, it is reasonable to pursue multiple policy goals, both seeking additional reductions in crime and in the continuing inequality of crime victimization and simultaneously seeking to reduce inequality of criminal justice sanctioning. If such policies are carried out sensibly, both kinds of inequality can be reduced without a meaningful tradeoff arising between them."

While accusations of police brutality are often the flashpoint for public protests over the criminal justice system, my own suspicion is that some of the anger and despair focused on the police arises because they are the visible front line of the criminal justice system. It would be interesting to watch the dynamics if protests of similar intensity were aimed at legislators who pass a cavalcade of seemingly small fines, which when imposed by judges add up to an insuperable burden for low-income families. Or if the protests were aimed at legislators, judges, and parole boards who make decisions about length of incarceration. Or if the protests were aimed at prisons and correctional officers. My own preference for the criminal justice system (for example, here and here) would be to rebalance the nation's criminal justice spending, with more going to police and less coming from fines, and the offsetting funding to come from reducing the sky-high levels of US incarceration. The broad idea is to spend more on tamping down the chance that crime will occur or escalate in the first place, while spending less on years of severe punishments after the crime has already happened.

Ray Fair: The Economy is Tilting Republican

Ray Fair is an eminent macroeconomist, as well as a well-known textbook writer (with Karl Case and Sharon Oster) who dabbles now and again in sports economics. Here I focus on one of Fair's other interests: the connection from macroeconomic conditions to election outcomes, a topic on which he has been publishing an occasional series of papers since 1978. With time and trial-and-error, Fair has developed a formula where anyone can plug in a few key economic statistics and obtain a prediction for the election. A quick overview of the calculations, along with links to some of Fair's recent papers on this subject, is available at Fair's website.

Fair's equation to predict the 2016 presidential election is
VP = 42.39 + 0.667*G – 0.690*P + 0.968*Z

On the left-hand side of the equation, VP is the Democratic share of the presidential vote. Given that a Democrat is in office, a legacy of economic growth should tend to favor the Democratic candidate, while inflation would tend to work against the Democrat. On the right-hand side, G is the growth rate of real per capita GDP in the first 3 quarters of the election year (at an annual rate); P is the growth rate of the GDP deflator (a measure of inflation based on everything in the GDP, rather than just on consumer spending as in the better-known Consumer Price Index); and Z is the number of quarters in the first 15 quarters of the second Obama administration in which the growth rate of real per capita GDP is greater than 3.2 percent at an annual rate.

Obviously, some of these variables aren't yet known, because the first three quarters of 2016 haven't happened yet. But here are Fair's estimates of the variables as of late April: G=0.87; P=1.28; Z=3. Plug those numbers into the formula, and the prediction is that the Democratic share of the two-party presidential vote in 2016 will be 44.99%.

Fair offers a similar equation to predict the 2016 House elections. The formula is

VC = 44.09 + 0.372*G – 0.385*P + 0.540*Z

where VC is the Democratic share of the two-party vote in Congressional elections. Plugging in the values for G, P and Z, the prediction is 45.54% of the House vote for Democrats.
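For anyone who wants to check the arithmetic, plugging Fair's late-April estimates into both equations is a one-minute exercise (a sketch in Python; the coefficients and variable values are the ones quoted above):

```python
# Fair's 2016 prediction equations, evaluated at his late-April 2016 estimates.
G = 0.87  # growth rate of real per capita GDP, first 3 quarters of 2016
P = 1.28  # growth rate of the GDP deflator
Z = 3     # quarters (of the first 15) with annualized growth above 3.2 percent

VP = 42.39 + 0.667 * G - 0.690 * P + 0.968 * Z  # presidential vote share
VC = 44.09 + 0.372 * G - 0.385 * P + 0.540 * Z  # House vote share

print(f"Democratic share, presidential vote: {VP:.2f}%")  # 44.99
print(f"Democratic share, House vote:        {VC:.2f}%")  # 45.54
```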

Of course, these formulas raise a number of questions. Why use these variables about economic growth rather than, say, the unemployment rate? Why measure inflation with the GDP deflator rather than with the Consumer Price Index? Where did the coefficient numbers come from?

The short answer to all these questions is that Fair's equations are chosen so that, looking back at historical election data from 1916 up through 2014, they are fairly simple and predict the elections over time with the smallest possible error. The long answer to why these specific variables were chosen and how the equation is estimated is that you need to read the research papers at Fair's website.
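The estimation step itself is ordinary least squares: pick the intercept and coefficients that minimize the squared prediction errors over past elections. Here is a sketch of what that means in practice; the numbers below are invented purely for illustration, since Fair's actual dataset of elections since 1916 is on his website:

```python
import numpy as np

# Hypothetical past elections: columns are G, P, Z. The vote vector is the
# incumbent-party vote share. All numbers here are made up for illustration.
X = np.array([[ 3.1, 2.0, 5],
              [ 0.5, 4.2, 1],
              [ 2.2, 1.1, 4],
              [-1.0, 3.0, 0],
              [ 1.8, 2.5, 3],
              [ 4.0, 1.5, 6]])
vote = np.array([51.2, 44.8, 49.9, 42.1, 47.5, 53.0])

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, vote, rcond=None)  # minimize squared error
intercept, bG, bP, bZ = coef
print(f"vote = {intercept:.2f} + {bG:.3f}*G + {bP:.3f}*P + {bZ:.3f}*Z")
```

Run on Fair's real data, this kind of regression produces coefficients like those in the equations above; run on different spans of history, it produces somewhat different ones, which is one reason the equation gets revised every few election cycles.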

Is there reason to believe that, because a correlation between the macroeconomy and election outcomes has existed during the last century or so of national elections, it will also hold true in 2016? Of course, Fair isn't making any claim that the macroeconomy fully determines election outcomes. Every election has lots of idiosyncratic factors related to the particular candidates and the events of the time. Correlations are just a way of describing or summarizing earlier patterns in the data. Fair's equation tells how macroeconomic factors have been correlated with election outcomes, based on the past historical record, but it doesn't have anything to say about all the other factors in a national election. For example, the predictions of the equation for the Democratic vote were much too low in 1992, when Bill Clinton was elected, and also in 2004, when George W. Bush was re-elected. On the other side, predictions from the equation of the Democratic share of the vote were too high in 1984 and 1988, when Ronald Reagan was re-elected and then George Bush was elected.

At the most basic level, Fair's equation is just saying that a slow rate of economic growth during 2016, along with the fact that there haven't been many rapid quarters of economic growth during the Obama presidency, will tend to make it harder for Democrats to win in 2016. But correlation doesn't prove causation, as Fair knows as well as anyone and better than most, and he would be the last one to overstate how much weight to give to these kinds of formulas. Back in 1996, Fair provided a nontechnical overview of this work in "Econometrics and Presidential Elections," appearing in the Journal of Economic Perspectives (where I work as Managing Editor). He wrote there:

"The main interest in this work from a social science perspective is how economic events affect the behavior of voters. But this work is also of interest from the perspective of learning (and teaching) econometrics. The subject matter is interesting; the voting equation is easy to understand; all the data can be put into a small table; and the econometrics offers many potential practical problems. … Thus, this paper is aimed in part at students taking econometrics, with the hope that it may serve as an interesting example of how econometrics can be used (or misused?). Finally, this work is of interest to the news media, which every fourth year becomes fixated on the presidential election. Although I spend about one week every four years updating the voting equation, some in the media erroneously think that I am a political pundit—or at least they have a misleading view of how I spend most of my days."

What Was Different About Housing This Time?

Everyone knows that the Great Recession was tangled up with a housing boom that went bust. But more precisely, what was different about housing in the most recent business cycle? Burcu Eyigungor discusses "Housing’s Role in the Slow Recovery" in the Spring 2016 issue of Economic Insights, published by the Federal Reserve Bank of Philadelphia.

As a starting point, here's private residential fixed investment–basically, spending on home and apartment construction and major renovations–as a share of GDP going back to 1947. Notice that this category of investment falls during every recession (shown by the shaded areas) and then usually starts bouncing back just before the end of the recession–except for the period after 2009.

The most recent residential building cycle looks different. Eyigungor explains:

"The housing boom from 1991 to 2005 was the longest uninterrupted expansion of home construction as a share of overall economic output since 1947 (Figure 1). During the 1991 recession, private home construction had constituted 3.5 percent of GDP, and it increased its share of GDP without any major interruptions to 6.7 percent in 2005. This share was the highest it had been since the 1950s. Just like the boom, the bust that followed was also different from earlier episodes. During the bust, private residential investment as a share of GDP fell to levels not seen since 1947 and has stayed low even after the end of the recession in 2009. In previous recessions, the decline in residential construction was not only much less severe, but the recovery in housing also led the recovery in GDP. As Federal Reserve Chair Janet Yellen has pointed out, in the first three years of this past recovery, homebuilding contributed almost zero to GDP growth."

There are two possible categories of reasons for the very low level of residential building since 2009. On the supply side, it may not seem profitable to build, given what was already built back before 2008 and the lower prices. On the demand side, one aftermath of the Great Recession could plausibly be that at least some people are feeling economically shaky and mistrustful of real estate markets, and so are not eager to buy.

Both supply and demand presumably played some role. But housing prices have now been rising again for about three years, and the "vacancy" rates for owner-occupied housing and rental housing are back to the levels from before the Great Recession. In that sense, it doesn't look as if an overhang of empty dwellings or especially low prices are the big problem for the housing market. Instead, Eyigungor argues that the demand side is what is holding back the housing market.

In particular, the demand for housing is tied up with the rate of "household formation"–that is, the number of people who are starting new households. The level of household formation was low for years after 2009 (and remember that these low levels are in the context of a larger population than several decades ago, so the rate of household formation would be lower still).

The rates of homeownership have now declined back to levels from the 1980s, and the share of renters has risen. "This decline has lowered overall housing expenditures, because homeowners on average spend more on housing than renters do because of the tax incentives of homeownership and holding a mortgage. Together, the declines in household formation and homeownership contributed to the decline in residential expenditures as a share of GDP."

Spending on housing usually helps lead the US economy out of recession, but not this time. The demand from new household formation hasn't been there. As I've pointed out in the past, both the Clinton administration with its National Homeownership Strategy and the Bush administration with its "ownership society" did a lot of bragging about that rise in homeownership rates from the mid-1990s up through about 2007. The gains to homeownership from those strategies have turned out to be evanescent, while some costs associated with those strategies have been all too real.

Mayfly Years: Thoughts on a Five-Year Blogging Anniversary

The first post on this blog went up five years ago, on May 17, 2011: the first three posts are here, here, and here. When it comes to wedding anniversaries, the good folks at Hallmark inform me that a five-year anniversary is traditionally "wood." But I suspect that blogs age faster than marriage. How much faster? There's an old and probably unreliable saying that a human year is seven dog-years. But when it comes to blogging, mayfly-years may be a more appropriate metric. The mayfly typically lives for only one day, maybe two. I've put up over 1,300 posts in the last five years, probably averaging roughly 1,000 words in length. Dynasties of mayflies have risen and fallen during the life of this blog.

Writing a blog 4-5 times each week teaches you some things about yourself. I've always been fascinated by the old-time newspaper columnists who churned out 5-6 columns every week, and I wondered if I could do that. In the last five years, I've shown myself that I could. The discipline of writing the blog has been good for me, pushing me to track down and read reports and articles that might otherwise have just flashed across my personal radar screen before disappearing. I've used the blog as a memory aid, so that when I dimly recall having seen a cool graph or read a good report on some subject, I can find it again by searching the blog–which is a lot easier than it used to be to search my office, or my hard drive, or my brain. My job and work-life bring me into contact with all sorts of material that might be of interest to others, and it feels like a useful civic or perhaps even spiritual discipline to shoulder the task of passing such things along.

It's also true that writing a regular blog embodies some less attractive traits: a compulsive need to broadcast one's views; an obsession about not letting a few days or a week go by without posting; an egoistic belief that anyone else should care; a need for attention; and a desire to shirk other work. Ah, well. Whenever I learn more about myself, the lesson includes a dose of humility.

The hardest tradeoff in writing this blog is finding windows of time in the interstices of my other work and life commitments, and the related concern that by living in mayfly years, I\’m not spending that time taking a deeper dive into thinking and writing that would turn into essays or books.

In a book published last year, Merton and Waugh: A Monk, A Crusty Old Man, and The Seven Storey Mountain, Mary Frances Coady describes the correspondence between Thomas Merton and Evelyn Waugh in the late 1940s and early 1950s. Merton was a Trappist monk who was writing his autobiographical book The Seven Storey Mountain. (Famous opening sentence: "On the last day of January 1915, under the sign of the Water Bearer, in a year of a great war, and down in the shadow of some French mountains on the borders of Spain, I came into the world.") Waugh was already well-known, having published Brideshead Revisited a few years earlier. Merton's publisher sent the manuscript to Waugh for evaluation, and Waugh both offered Merton some comments and also ended up as the editor of the English edition.

Waugh sent Merton a copy of a book called The Reader over My Shoulder, by Robert Graves and Alan Hodge, one of those lovely short quirky books of advice to writers that I think is now out of print. Here's a snippet from one of the early letters from Waugh to Merton:

With regard to style, it is of course much more laborious to write briefly. Americans, I am sure you will agree, tend to be very long-winded in conversation and your method is conversational. I relish the laconic. … I fiddle away rewriting any sentence six times mostly out of vanity. I don't want anything to appear with my name that is not the best I am capable of. You have clearly adopted the opposite opinion … banging away at your typewriter on whatever turns up. …

But you say that one of the motives of your work is to raise money for your house. Well simply as a matter of prudence you are not going the best way about it. In the mere economics of the thing, a better return for labour results in making a few things really well than in making a great number carelessly. You are plainly undertaking far too many trivial tasks for small returns. …

Your superiors, you say, leave you to your own judgment in your literary work. Why not seek to perfect it and leave mass-production alone? Never send off any piece of writing the moment it is finished. Put it aside. Take on something else. Go back to it a month later and re-read it. Examine each sentence and ask "Does this say precisely what I mean? Is it capable of misunderstanding? Have I used a cliche where I could have invented a new and therefore arresting and memorable form? Have I repeated myself and wobbled around the point when I could have fixed the whole thing in six rightly chosen words? Am I using words in their basic meaning or in a loose plebeian way?" … The English language is incomparably rich and can convey every thought accurately and elegantly. The better the writing the less abstruse it is. … Alas, all this is painfully didactic–but you did ask for advice–there it is.

In all seriousness, this kind of advice makes my heart hurt in my chest. Take the extra time to write briefly? Rewrite sentences six times? Put things away for a month and return to them? Bang away at the keyboard on whatever turns up? Far too many trivial tasks for small returns? Wobble around the point instead of hunting for six well-chosen words? Many of these blog posts are knocked out an hour before bedtime, and I often don't reread even once before clicking on "Publish."

Here are some snippets of Merton\’s response to Waugh:

I cannot tell you how truly happy I am with your letter and the book you sent. In case you think I am exaggerating I can assure you that in a contemplative monastery where people are supposed to see things clearly it sometimes becomes very difficult to see anything straight. It is so terribly easy to get yourself into some kind of a rut in which you distort every issue with your own blind bad habits–for instance rushing to finish a chapter before the bell rings and you will have to go and do something else.

It has been quite humiliating for me to find out (from Graves and Hodge) that my own bad habits are the same as those of every other second-rate writer outside the monastery. The same haste, distraction, etc. … On the whole I think my haste is just as immoral as anyone else's and comes from the same selfish desire to get quick results with a small amount of effort. In the end, the whole question is largely an ascetic one! …

Really I like The Reader Over Your Shoulder very much. In the first place it is amusing. And I like their thesis that we are heading toward a clean, clear kind of prose. Really everything in my nature–and in my vocation, too–demands something like that if I am to go on writing. … You would be shocked to know how much material and spiritual junk can accumulate in the corners of a monastery and in the minds of the monks. You ought to see the pigsty in which I am writing this letter. There are two big crates of some unidentified printed work the monastery wants to sell. About a thousand odd copies of ancient magazines that ought to have been sent to the Little Sisters of the Poor, a dozen atrocious looking armchairs and piano stools that are used in the sanctuary for Pontifical Masses and stored on the back of my neck the rest of the time. Finally I am myself embedded in a small skyscraper of mixed books and magazines in which all kinds of surreal stuff is sitting on top of theology. …

I shall try to keep out of useless small projects that do nothing but cause a distraction and dilute the quality of what I turn out. The big trouble is that in those two hours a day when I get at a typewriter I am always having to do odd jobs and errands and I am getting a lot of letters from strangers, too. These I hope to take care of with a printed slip telling them politely to lay off the poor monk, let the guy pray. 

I find myself oddly comforted by the thought that a monastery may be just as cluttered, physically and metaphysically, as an academic office. But I'm not sure what ultimate lessons to take away from these five-year anniversary thoughts. I don't plan to give up the blog, but it would probably be a good idea if I can find the discipline to shift along the quality-quantity tradeoff. Maybe trend toward 3-4 posts per week, instead of 4-5. Look for opportunities to write shorter, rather than longer. Avoid the trivial. Try to free up some time and see what I might be able to accomplish on some alternative writing projects. I know, I know, it's like I'm making New Year's resolutions in May. But every now and again, it seems appropriate to share some thoughts about this blogging experience. Tomorrow the blog will return to its regularly scheduled economics programming.

Homage: I ran into part of the Waugh quotation from above in the "Notable & Quotable" feature of the Wall Street Journal on May 3, 2016, which encouraged me to track down the book.

Tradeoffs of Cultured Meat Production

A major technological innovation may be arriving in a very old industry: the production of meat. Instead of producing meat by growing animals, meat can be grown directly. The process has been happening in laboratories, but some are looking ahead to large-scale production of meat in "carneries."

This technology has a number of implications, but here I'll focus on some recent research on how a shift away from conventionally produced meat to cultured or in vitro meat production could help the environment. Carolyn S. Mattick, Amy E. Landis, Braden R. Allenby, and Nicholas J. Genovese tackle this question in "Anticipatory Life Cycle Analysis of In Vitro Biomass Cultivation for Cultured Meat Production in the United States," published last September in Environmental Science & Technology (2015, v. 49, pp. 11941–11949). One of the implications of their work is that factory-style meat production may produce real environmental gains for beef, but perhaps not for other meats.

Another complication is that not all production of vegetables has a lower environmental impact than, say, poultry or fresh fish. Michelle S. Tom, Paul S. Fischbeck, and Chris T. Hendrickson provide some evidence on this point in their paper, "Energy use, blue water footprint, and greenhouse gas emissions for current food consumption patterns and dietary recommendations in the US," published in Environment Systems and Decisions in March 2016 (36:1, pp. 92-103).

As background, one of the first examples of the new meat production technology happened back in 2001, when a team led by bioengineer Morris Benjaminson cut small chunks of muscle from goldfish, and then immersed the chunks in a liquid extracted from the blood of unborn calves that scientists use for growing cells in the lab. The New Scientist described the results this way in 2002:

"After a week in the vat, the fish chunks had grown by 14 per cent, Benjaminson and his team found. To get some idea whether the new muscle tissue would make acceptable food, they washed it and gave it a quick dip in olive oil flavoured with lemon, garlic and pepper. Then they fried it and showed it to colleagues from other departments. "We wanted to make sure it'd pass for something you could buy in the supermarket," he says. The results look promising, on the surface at least. "They said it looked like fish and smelled like fish, but they didn't go as far as tasting it," says Benjaminson. They weren't allowed to in any case–Benjaminson will first have to get approval from the US Food and Drug Administration."

The first hamburger grown in a laboratory was served in London in 2013. As an article from Issues in Science and Technology reported at the time: "From an economic perspective, cultured meat is still an experimental technology. The first in vitro burger reportedly cost about $335,000 to produce and was made possible by financial support from Google cofounder Sergey Brin." But the price is coming down: a Silicon Valley start-up is now making meatballs from cultured meat at $18,000 per pound.

Mattick, Landis, Allenby, and Genovese provide an evaluation of environmental effects over the full life-cycle of production: for example, this means including the environmental effects of agricultural products used to feed livestock. They compare existing studies of the environmental effects of traditional production of beef, pork, and poultry with a 2011 study of the environmental effects of in vitro meat production and with their own study. (The 2011 study of in vitro meat production is "Environmental Impacts of Cultured Meat Production," by Hanna L. Tuomisto and M. Joost Teixeira de Mattos, appearing in Environmental Science & Technology, 2011, 45, pp. 6117-6123.) They summarize the results of their analysis along four dimensions: industrial energy use, global warming potential, eutrophication potential (that is, addition of chemical nutrients like nitrogen and phosphorus to the ecosystem), and land use.

Here's their summary of industrial energy use, which they view as definitely higher for in vitro meat than for pork and poultry, and likely to be higher than for beef. They explain:

"These energy dynamics may be better understood through the analogy of the Industrial Revolution: Just as automobiles and tractors burning fossil fuels replaced the external work done by horses eating hay, in vitro biomass cultivation may similarly substitute industrial processes for the internal, biological work done by animal physiologies. That is, meat production in animals is made possible by internal biological functions (temperature regulation, digestion, oxygenation, nutrient distribution, disease prevention, etc.) fueled by agricultural energy inputs (feed). Producing meat in a bioreactor could mean that these same functions will be performed at the expense of industrial energy, rather than biotic energy. As such, in vitro biomass cultivation could be viewed as a renewed wave of industrialization."

With regard to global warming potential, in vitro production of meat is estimated to be lower than beef, but higher than poultry and pork.

The other two dimensions are eutrophication and land use. Eutrophication basically involves effects of fertilizer use, which for traditional meat production involves both agricultural production and disposal of animal waste products. The environmental effects of in vitro meat production are quite low here, as is the effect of in vitro meat production on land use.

Of course, these estimates are hypothetical. No factory-scale production of cultured meat exists yet. But if the "carnery" does become a new industry in the next decade or so, these kinds of tradeoffs will be part of the picture.

As I noted above, it jumps out from these figures that traditional production of beef has a much more substantial environmental footprint than production of poultry or pork. In their paper, Tom, Fischbeck, and Hendrickson take on a slightly different question: what is the environmental impact of some alternative diet scenarios? Specifically, fewer calories with the same mixture of foods; the same calories with an alternative mixture of foods recommended by the US Department of Agriculture; or both lower calories and the alternative diet. The USDA-recommended diet involves less sugar, fat, and meat, and more fruits, vegetables, and dairy. But counterintuitively (at least for me), they find that the reduced-calorie, altered-diet choice has larger environmental effects than current dietary choices. They write:

"However, when considering both Caloric reduction and a dietary shift to the USDA recommended food mix, average energy use increases 38%, average blue water footprint increases 10%, and average GHG [greenhouse gas] emissions increase 6%."

Why does a shift away from meat and toward fruits and vegetables create larger environmental effects? The authors do a detailed breakdown of the environmental costs of various foods along their three dimensions of energy use, blue water footprint, and greenhouse gas emissions. Here's an overall chart. An overall message is that while meat (excluding poultry) is at the top on greenhouse gas emissions, when it comes to energy use and blue water footprint, meat is lower than fruit and vegetables.

As the authors write: "[T]his study’s results demonstrate how the environmental benefits of reduced meat consumption may be offset by increased consumption of other relatively high impact foods, thereby challenging the notion that reducing meat consumption automatically reduces the environmental footprints of one’s diet. As our results show food consumption behaviors are more complex, and the outcomes more nuanced." For a close-up illustration of the theme, here's a chart from Peter Whoriskey at the Washington Post Wonkblog, created based on supplementary materials from the Tom, Fischbeck, and Hendrickson paper. A striking finding is that on the dimension of greenhouse gas emissions, beef is similar to lettuce. The greenhouse gas emissions associated with production of poultry are considerably lower than for yogurt, mushrooms, or bell peppers.
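The offset the authors describe is, at bottom, weighted-average arithmetic. Here is a minimal sketch with made-up per-calorie footprint numbers (not figures from the Tom, Fischbeck, and Hendrickson paper): even when the meat share of calories is cut in half, a diet's total energy and water footprints can rise if the replacement foods carry higher per-calorie footprints on those dimensions.

```python
# Illustrative only: hypothetical per-1,000-Calorie footprints, NOT the
# paper's actual figures. Shows how a weighted-average diet footprint can
# rise when meat calories shift toward fruits and vegetables.

# energy use (MJ) and blue water (liters) per 1,000 Calories (made-up values)
footprint = {
    "meat":       {"energy": 20.0, "water": 500.0},
    "vegetables": {"energy": 30.0, "water": 800.0},
    "fruit":      {"energy": 28.0, "water": 900.0},
    "grains":     {"energy": 10.0, "water": 300.0},
}

def diet_footprint(shares, dim):
    """Weighted-average footprint of a diet, given calorie shares per food."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares must sum to 1
    return sum(shares[food] * footprint[food][dim] for food in shares)

# hypothetical calorie shares: current diet vs. a recommended shift
current     = {"meat": 0.30, "vegetables": 0.15, "fruit": 0.10, "grains": 0.45}
recommended = {"meat": 0.15, "vegetables": 0.30, "fruit": 0.25, "grains": 0.30}

for dim in ("energy", "water"):
    cur = diet_footprint(current, dim)
    rec = diet_footprint(recommended, dim)
    print(f"{dim}: current={cur:.1f}, recommended={rec:.1f}, "
          f"change={100 * (rec - cur) / cur:+.0f}%")
```

With these invented numbers, both the energy and water footprints rise under the recommended diet even though meat's calorie share falls from 30% to 15%, which is the qualitative pattern the paper reports.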

Again, the environmental costs of beef in particular are high. If cultured meat could replace production of beef in a substantial way, it might bring overall environmental gains. But making defensible statements about diet and the environment seems to require some nuance. Lumping beef, pork, poultry, shellfish, and other fish all into one category called "meat" covers up some big differences, as does lumping all fruits into a single category or all vegetables into a single category.

Addendum: This post obviously focuses on environmental tradeoffs, not the economic tradeoffs that cultured meat would pose for farmers or the animal welfare tradeoffs for livestock. Jacy Reese writes about "The Moral Demand for Cultured Meat" from an animal welfare perspective in Salon, February 13, 2016.

The US Weighted by GDP and Residential Property Values

You've seen those maps where the size of countries or states is distorted larger or smaller according to their population? Here's a US map in which the size of counties is adjusted by the value of the economic output produced in that county. It's one of those useful maps where you see the world in a different way, by Max Galka at the Metrocosm website.

[Cartogram: US counties scaled by GDP]

It's also worth a look at how the 10 largest US urban areas appear at the end of the animation.
[Animation: US metro areas scaled by GDP]

Here's another from Galka in which the US counties are distorted larger or smaller according to the value of residential real estate in that county. Thus, big urban areas like the Washington-DC corridor, the urban parts of California, and cities in Florida, Texas, Illinois, and elsewhere blossom in size.


Galka has a bunch of fascinating animations over at the Metrocosm website. As one more example, check out his animation of immigration flows to the US over the last two centuries.

Economics of Hosting the Olympics

The Summer Olympics starts in Rio de Janeiro in August. From the financial point of view of the host city, it's very likely to be a money-losing proposition–just like almost all the other Olympic games in recent decades. Robert A. Baade and Victor A. Matheson describe the potential benefits and certain costs facing cities that host an Olympic Games in "Going for the Gold: The Economics of the Olympics," in the Spring 2016 issue of the Journal of Economic Perspectives. They write:

"In this paper, we explore the costs and benefits of hosting the Olympic Games. On the cost side, there are three major categories: general infrastructure such as transportation and housing to accommodate athletes and fans; specific sports infrastructure required for competition venues; and operational costs, including general administration as well as the opening and closing ceremony and security. Three major categories of benefits also exist: the short-run benefits of tourist spending during the Games; the long-run benefits or the “Olympic legacy” which might include improvements in infrastructure and increased trade, foreign investment, or tourism after the Games; and intangible benefits such as the “feel-good effect” or civic pride. Each of these costs and benefits will be addressed in turn, but the overwhelming conclusion is that in most cases the Olympics are a money-losing proposition for host cities; they result in positive net benefits only under very specific and unusual circumstances. Furthermore, the cost–benefit proposition is worse for cities in developing countries than for those in the industrialized world."

Consider the costs in a bit more detail. "The International Olympic Committee requires that the host city for the Summer Games have a minimum of 40,000 hotel rooms available for spectators and an Olympic Village capable of housing 15,000 athletes and officials." For example, Rio had 25,000 hotel rooms, and thus committed to build an additional 15,000. Not surprisingly, lots of hotels typically go broke after a Games. Next come the facilities themselves:

"Even modern cities in high-income countries may need to build or expand an existing velodrome, natatorium, ski-jumping complex, or speed skating oval. Furthermore, modern football and soccer stadiums are generally incompatible with a full-size Olympic track, because including space for such a track would cause an undesirably large separation between the fans and the playing field. For this reason, Boston’s failed bid to host the 2024 Summer Games had proposed $400 million to build an entirely new stadium for the track and field events, despite the presence of four large existing outdoor sports stadiums in the area."

Then add the costs of running the Games themselves. Security costs alone for a Summer Olympics have been running at $1.6 billion. How does this compare to revenues received by the Organizing Committee for the Games? Here are some illustrative estimates from the 2010 Winter Games in Vancouver and the 2012 Summer Olympics in London. The revenues don't cover even one-third of the costs.

Thus, the economic case for hosting an Olympics must rely on lots and lots of spinoff benefits: construction jobs, tourist spending during the Games, a legacy of lasting infrastructure, improved recognition for the city that could help the city after the Games. But except in a few cases, these benefits turn out to shrink in size the more you look at them.

Sure, tourists come for an Olympics. But other tourists who want to avoid the Olympics–and locals, too–leave town. The number of international visitors to London during the months of those 2012 Summer Olympics was actually lower than the previous year, and Beijing saw a fall in international tourists in 2008 compared to the same months the previous year. Those extra hotel rooms do raise profits of the tourism industry, which is a boost to their shareholders and out-of-town management. Systematic studies show the Games offer a very small boost to the local economy when they are happening, but not a lasting gain. That promise of lasting infrastructure often ends up looking like a white elephant.

Many of the venues from the Athens Games in 2004 have fallen into disrepair. Beijing’s iconic “Bird’s Nest” Stadium has rarely been used since 2008 and has been partially converted into apartments, while the swimming facility next door dubbed the “Water Cube” was repurposed as an indoor water park at a cost exceeding $50 million (Farrar 2010). The Stadium at Queen Elizabeth Olympic Park in London, the site for most of the track and field events as well as the opening and closing ceremonies in 2012, was designed to be converted into a soccer stadium for local club West Ham United in order to avoid the “white elephant” problem. Before the Games, the stadium had an original price tag of £280 million. Cost overruns led to a final construction cost of £429 million, and then the conversion cost to remove the track and prepare the facility to accommodate soccer matches topped £272 million, of which the local club is paying only £15 million … 

The idea that an Olympics will raise the long-term visibility of a city works now and then: arguably it worked for the 2002 Winter Games in Salt Lake City and for the Summer Games in Barcelona in 1992. But more often, the city is either already just about as well-known as it is likely to be (London, Beijing, Rio) or it is small or remote enough that it doesn't benefit from greater long-term visibility (Lillehammer, Calgary).

The International Olympic Committee has professed to be concerned about the rising costs of the Games. I'm skeptical about the depth of this concern. But if the concern is real, then the IOC will look at future bids differently. For example, it will encourage cities with lots of existing facilities that could be used for the Games, and discourage the building of fancy, costly new structures or hosting spectacularly expensive opening and closing ceremonies. The IOC might choose a limited number of cities and rotate the Games between them, or have the Games happen in the same city twice in succession. It's a crazy idea, I know, but maybe the focus could shift back from promotionalism to the actual athletes and events.

Committee Behavior and the Federal Reserve

Frustration with committees is a way of life. "A group of the unwilling, chosen by the unfit, to do the unnecessary." "A group of people who individually can do nothing but as a group decide that nothing can be done." "A body that keeps minutes and wastes hours." But committees persist, because they have advantages besides time-wasting and diffusion of responsibility. When informed people with disparate knowledge can engage with each other in a substantive and comprehensive way, it is at least possible that errors can be avoided, insights sharpened, and outcomes improved. Indeed, Gary Charness and Matthias Sutter argue that "Groups Make Better Self-Interested Decisions," in the Summer 2012 issue of the Journal of Economic Perspectives.

The Federal Reserve Open Market Committee is, yes, a committee. How well does it work? Kevin M. Warsh has had a front-row perspective. Warsh was a member of the Federal Reserve Board of Governors from 2006 to 2011–and thus, right through the heart of the Great Recession. He was asked by the Bank of England to review the deliberations of its own decision-making Monetary Policy Committee, and to study and consider how other central banks work as well. Warsh describes his insights in "Institutional Design: Deliberations, Decisions, and Committee Dynamics," which appears as Chapter 4 in Central Bank Governance And Oversight Reform, edited by John H. Cochrane and John B. Taylor, and just published by the Hoover Institution.

How do you set up a committee to have the highest chance of reaching a wise conclusion? Warsh suggests that a combination of high-quality inputs, genuine deliberation, and optimal committee design all play a role. Otherwise, outcomes can fall short. Warsh writes: 

The literature identifies numerous interrelated theories that link internal management inadequacies to organizational failure. These include:

• Janis’s canonical Groupthink theory (1972, 1982), which highlights the tendency of small, homogenous management teams to make suboptimal decisions;
• Hambrick and Mason’s Upper Echelon theory (1984), which links organizational achievements to the composition and background of an organization’s senior management team;
• Staw, Sandelands, and Dutton’s Threat Rigidity Effect theory (1981), which explains the tendency of management groups to stick rigidly to tried and tested techniques at times of threat and challenge, thereby increasing the risk of organizational failure among incumbents at times of secular change.

Does the Fed Open Market Committee operate in a way that seems likely to gain the benefits of committees and minimize the costs? Here's some basic background comparing the Fed to policy decision-making committees at other central banks.

Warsh writes: "The FOMC’s institutional design is not inconsistent with sound practice. But there are certain institutional aspects of the FOMC which differ somewhat from best practice, at least as identified in the literature." Here are some examples of what he has in mind.

Successful committees don't have too many participants. "By statute, the FOMC includes twelve voting members. … Policy deliberations, however, occur in a much larger institutional setting. Nineteen people convene in the discussion (voters and non-voters alike) and a total of about sixty people are in attendance, including a range of subject-matter experts on key aspects of the economic and financial landscape."

The members of successful committees have independent information that they can bring to bear. "While the Reserve Bank presidents are supported by large, independent staffs of economists to help inform their forecasts and policy judgments, I would note that the economic models and forecasting tools are substantially similar across the Federal Reserve System. This explains, in part, the remarkable conformity of the so-called dot plots in the projections from FOMC participants."

A lack of disagreement suggests an insufficient breadth of views. "One simple mechanism for evaluating the breadth of views is to review trends in dissent: that is, the number of FOMC members who voted against the majority policy stance. By both FOMC tradition and practice, the bar for lodging a dissenting vote is high. Neither Chairman Greenspan nor Chairman Bernanke ever cast a vote in the minority. In contrast, the governor of the Bank of England was outvoted on nine occasions since 1997. And governors of the Federal Reserve, unlike Reserve Bank presidents, only rarely dissented in casting of votes. In the past decade, for example, there has been only one instance of dissent by a sitting governor."

In successful committees, people are willing to express their unvarnished opinions. But since the Fed started publishing transcripts of its meetings in 1993, albeit with a lag, "Meade and Stasavage (2008) find evidence that the Fed’s post-1993 transcript policy led to deterioration in the quality of FOMC deliberations. In the authors’ formulation, policymakers are motivated to achieve two goals in the policymaking process: making optimal policy decisions and garnering a good reputation in public (often associated with conformity with the prevailing consensus). The existence of public transcripts, even with a lag, caused FOMC participants to voice less dissent in the meetings themselves and to be less willing to change policy positions over time. For example, the number of dissenting opinions expressed by voting members fell from forty-eight (between 1989 and 1992) to twenty-seven (between 1994 and 1997)."

In his comments on the Warsh paper, Peter Fisher, who spent a number of years at the New York Fed in the 1980s and 1990s, summarized what he viewed as Warsh's message in this way:

"I had almost ten years at the FOMC table … I thought I understood the awkwardness of group accountability when more than once I saw the FOMC gravitate toward no one’s first choice and virtually no one’s second choice, and we ended up with third-best outcomes. But now I'm also worried about individual accountability of a pseudo-nature, which I'm afraid is the regime we now have …"

Here are links to all the chapters in Central Bank Governance And Oversight Reform, edited by John H. Cochrane and John B. Taylor. A complete print copy is also available for purchase.

New Angles on Inequality in Life Expectancy

We know several facts about US life expectancy with a high degree of confidence. Overall, life expectancy is rising: indeed, it is rising for every age group. However, there have long been gaps in life expectancy between various groups, like those with higher and lower income levels. What we don't know about life expectancy with a high degree of confidence, and are still sorting out, is how the changes in life expectancy across groups are evolving. Janet Currie and Hannes Schwandt tackle this question in "Mortality Inequality: The Good News from a County-Level Approach," published in the Spring 2016 issue of the Journal of Economic Perspectives.

Currie and Schwandt provide this quick overview of recent studies:

However, this overall decline in mortality rates has been accompanied by prominent recent studies highlighting that the gains have not been distributed equally (for example, Cutler, Lange, Meara, Richards-Shubik, and Ruhm 2011; Chetty et al. 2015; National Academies of Science, Engineering, and Medicine (NAS) 2015; Case and Deaton 2015). Indeed, several studies argue that when measured across educational groups and/or geographic areas, mortality gaps are not only widening, but that for some US groups, overall life expectancy is even falling (Olshansky et al. 2012; Wang, Schumacher, Levitz, Mokdad, and Murray 2013; Murray et al. 2006). It seems to have become widely accepted that inequality in life expectancy is increasing.

But Currie and Schwandt marshal evidence that is arguably more representative than what has been used in previous studies to argue that in many dimensions, the inequality of life expectancy across various groups is either declining or not changing by much.

One of the issues here is that many sources of data on mortality rates don't include measures of income. As an example of the issues here, I discussed the NAS study about growing inequality of life expectancy by income on this blog last September. However, that study (and several others) is based on the Health and Retirement Study, which looks only at people over age 50. Indeed, every six years it chooses a sample of 50-year-olds and then tracks that sample every two years. It's wonderful data for studying certain choices of this population, and it does include mortality rates and past income. But the entire sample is 20,000 people over the age of 50, so it's tricky to project back to life expectancies at earlier ages and to do detailed comparisons across groups. The authors write: "For example, Chetty et al. (2015) use mortality at age 40 to 63 to estimate income-specific trends in life expectancy, while NAS (2015) uses mortality at age 50 to 78, an approach that by construction does not consider developments at younger ages."

Those who want more detail on the difficulties of comparing mortality rates by group can go through the Currie and Schwandt article. Here, I want to focus on the county-level approach that they take. Detailed data on mortality rates by age, gender, and race are available at the county level. We also have census data on poverty rates by county, median income by county, and other variables. The authors take three years: 1990, 2000, and 2010. They rank the counties in that year according to a certain factor–like poverty rates–and then look at how life expectancies varied across counties. By doing this exercise in 1990, 2000, and 2010, they can look at how inequality of life expectancy is varying over time.

Here's a sample of the results. Each pair of figures is divided up by age group and by gender. Focus on the upper-left figure for a moment. The blue triangles show the 1990 county-level mortality rate for females age 0-4 in counties ranging from the lowest poverty rates up to the highest percentile of poverty rates. To be specific, there are 20 blue triangles, one for each five percentiles of the poverty distribution (that is, percentiles 1-5, 6-10, and so on up to 96-100). The blue line is a best-fit line for the blue triangles, and the upward slope shows that counties with higher poverty also have higher mortality rates. The green circles show estimates across counties for the year 2010, and the green line is a best-fit line for 2010. The dashed line is a best-fit line for 2000, but to keep the figures from getting too messy, the separate points aren't shown for 2000. Overall mortality rates fell from 1990 to 2010. Also, the slope of the green line is flatter than the slope of the blue line, which means that females in the 0-4 age group in high-poverty counties had closed the mortality gap to some extent with those in low-poverty counties.
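The binning-and-slope exercise behind those figures is simple enough to sketch in a few lines of Python. This is a toy version with synthetic county data (not the authors' actual figures): rank counties by poverty rate, split them into 20 bins of five percentiles each, take mean mortality in each bin, and fit a line. A flatter fitted slope in a later year means less mortality inequality between poorer and richer counties.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3000  # hypothetical number of counties

# synthetic county poverty rates (percent) and mortality rates per 1,000;
# the "1990" data has a steeper poverty gradient than the "2010" data
poverty = rng.uniform(0, 40, n)
mort_1990 = 2.0 + 0.08 * poverty + rng.normal(0, 0.3, n)
mort_2010 = 1.2 + 0.03 * poverty + rng.normal(0, 0.3, n)

def bin_slope(poverty, mortality, n_bins=20):
    """Slope of a best-fit line through mean mortality in each poverty bin."""
    order = np.argsort(poverty)                 # rank counties by poverty
    bins = np.array_split(order, n_bins)        # 20 bins of 5 percentiles each
    x = np.arange(n_bins) + 0.5                 # bin midpoints
    y = np.array([mortality[b].mean() for b in bins])
    slope, intercept = np.polyfit(x, y, 1)      # best-fit line through bins
    return slope

s90 = bin_slope(poverty, mort_1990)
s10 = bin_slope(poverty, mort_2010)
print(f"fitted slope, 1990-style data: {s90:.3f}")
print(f"fitted slope, 2010-style data: {s10:.3f}")  # flatter: less inequality
```

Run on this synthetic data, the 2010-style slope comes out well below the 1990-style slope, which is the pattern the blue-versus-green lines show for young females in the actual figures.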

When you look across these kinds of figures, you see variation across age groups in the size of the mortality decline: that is, the gap between the green and blue lines. You also see differences in how the slope of the mortality lines has changed: for the youngest age groups (which, remember, were not well-represented in the data from earlier studies), inequality of mortality has decreased, while for women in the 50+ age group, the level of mortality has dropped (green line lower than blue line) but the inequality of mortality is greater (green line steeper than blue line).

If you know anything about academic researchers, you can imagine how Currie and Schwandt work through this kind of data with care and attention. For example, you can rank counties in other ways, like median income or high school dropouts or even the starting level of life expectancy. You can break down the mortality rates by race/ethnicity as well as by age and gender. Here are a few of the conclusions that emerge.

1) "We find that inequality in mortality has fallen greatly among children. It is worth emphasizing that the reductions in mortality among African Americans, especially African-American males of all ages, are stunning and that is a major driver of the overall positive picture. This positive finding has been largely neglected in much of the discussion of overall mortality trends." The authors point out that good health earlier in life is somewhat predictive of good health later in life, so improved mortality among children is good news in both the short run and the long run.

2) The changes in inequality of mortality aren't being primarily driven by the rise of income inequality. There are lots of population segments where income inequality is up and mortality inequality is down. Moreover, as the authors point out, even the direction of causality from income inequality to health problems is questionable; instead, the direction of causality is often that poor health status leads to lower income levels. The drivers of mortality changes across groups probably have more to do with the rate of change in habits (smoking, obesity, perhaps opioids), as well as public policy changes over the last two decades affecting nutrition and access to health care.

3) "Although our overall message is more positive than some earlier studies, we do find an alarming stagnation in mortality among white women aged 20 to 49."

4) Changes in smoking rates are probably part of the story here. Men had higher smoking rates than women, but then decreased those smoking rates more dramatically. This pattern probably explains why mortality gains among older women look relatively small (and even negative for some specific groups), and may also explain the greater inequality of mortality rates across counties by poverty level.

Here's how Currie and Schwandt conclude:

"It sometimes seems as if the research literature on mortality is compelled in some way to emphasize a negative message, either about a group that is doing less well or about some aspect of inequality that is rising. … We believe that a balanced approach to the mortality evidence, which recognizes real progress as well as areas in need of improvement, is more likely to result in sensible policymaking. After all, emphasizing the negative could send the message that “nothing works,” especially in the face of seemingly relentless increases in income inequality. We have emphasized considerable heterogeneity in the evolution of mortality inequality by age, gender, and race. Going forward, identifying social policies that have helped the poor and reduced mortality inequality is an important direction for future research. Similarly, understanding the reasons that some groups and age ranges have seen stagnant mortality rates will be important for mobilizing efforts to reduce inequality in mortality and improve the health of the poor."

(Full disclosure: I've labored in the fields as the Managing Editor of the JEP for 30 years. All articles in JEP, both current and back to the first issue, are freely available online compliments of the American Economic Association.)

The Rise in Polarization: Both Real and Exaggerated?

Political polarization refers to the phenomenon that more people are self-identifying as being at one end or the other of the political spectrum, with fewer in between. Jacob Westfall, Leaf Van Boven, John R. Chambers, and Charles M. Judd argue that the rise in polarization is real, but also that people have an exaggerated perception of the extent of polarization, in "Perceiving Political Polarization in the United States: Party Identity Strength and Attitude Extremity Exacerbate the Perceived Partisan Divide." It was published in Perspectives on Psychological Science (2015, 10:2, pp. 145-158). (The journal isn't freely available online, but many readers may have access through library subscriptions.)

Here's a figure that, with some explanation, tells their story. The bottom line shows a measure of actual polarization, which seems to have started rising in the 1990s. The top line shows the perception of polarization. The perception is consistently above the reality, and it's also more variable–for example, showing jumps in the 1980s and 1990s.

The definition of "polarization" here is based on opinion survey data from the ongoing American National Election Study, which started back in 1968. Part of the survey asks people a set of questions about their beliefs on various issues. Here's an example of the kind of question a respondent would read.

Some people think the government should provide fewer services, even in areas such as health and education, in order to reduce spending. Suppose these people are at one end of a scale, at point 1. Other people feel that it is important for the government to provide many more services even if it means an increase in spending. Suppose these people are at the other end, at point 7. And of course, some other people have opinions somewhere in between, at points 2, 3, 4, 5, or 6.

Other questions are about rights of the accused, defense spending, guaranteed jobs and income, urban unrest, and others. "For each issue, the option at one end of the scale represents a stereotypic liberal response, whereas the option at the other end of the scale represents a stereotypic conservative response."

Based on this survey data, the basic measure of political polarization in the figure above is straightforward: average the scores for self-identified Republicans, average the scores for self-identified Democrats, and subtract to get the gap between the two. However, respondents to the survey were also asked about how they perceived the attitudes of the Democratic and Republican parties and their presidential candidates. From this data, you can calculate how self-identified Democrats perceive Republicans and how self-identified Republicans perceive Democrats, and compare the perception to the reality.
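The gap calculation itself is elementary. Here is a minimal sketch with made-up respondent data (not ANES figures): scores run from 1 (stereotypic liberal) to 7 (stereotypic conservative); actual polarization is the gap between the party means of respondents' own positions, and perceived polarization is the gap between where respondents place the two parties.

```python
# Made-up survey responses, illustrative only. Each tuple is
# (party, own 1-7 score, perceived Democratic position, perceived Republican position).
respondents = [
    ("D", 2.5, 2.0, 6.5),
    ("D", 3.0, 2.5, 6.0),
    ("R", 5.5, 1.5, 6.0),
    ("R", 5.0, 2.0, 5.5),
]

def mean(xs):
    return sum(xs) / len(xs)

# actual polarization: gap between party means of respondents' own scores
dem_scores = [score for party, score, _, _ in respondents if party == "D"]
rep_scores = [score for party, score, _, _ in respondents if party == "R"]
actual_gap = mean(rep_scores) - mean(dem_scores)

# perceived polarization: average gap between where respondents
# place the Republican and Democratic parties
perceived_gap = mean([rep_pos - dem_pos
                      for _, _, dem_pos, rep_pos in respondents])

print(f"actual polarization:    {actual_gap:.2f}")     # 2.50
print(f"perceived polarization: {perceived_gap:.2f}")  # 4.00
```

With these invented responses, the perceived gap (4.0) exceeds the actual gap (2.5), which is the qualitative pattern in the figure: perception consistently sits above reality.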

(It's worth a pause here to note that studies of this kind have some obvious weaknesses. They don't ask identical questions every year: for example, there were questions on school busing in the 1970s and on cooperation with the USSR in the 1980s. In addition, it's possible that the mixture of people who self-identify as Republicans or Democrats has changed in some subtle ways over time. As with all social science studies and survey data, proceed with caution.)

What factors help to explain why the perceptions of polarization are consistently higher than the actuality? Westfall, Van Boven, Chambers, and Judd suggest several interrelated reasons. One is that the very act of categorizing an "other" party leads to what is sometimes called "groupiness," where people start to exaggerate characteristics of the other group. "Analysis of the ANES reveal that both Democrats and Republicans see the other group as more polarized than their own group … Independents … perceive the stances of both the parties and the presidential candidates as being closer together than do the partisan respondents on either side." Indeed, an ironic but totally understandable phenomenon emerges here: those who are most polarized are also most likely to perceive a high degree of polarization. Those who identify themselves as a "strong Democrat" or a "strong Republican" are most likely to see the other party as showing strong polarization. Those who have the most polarized beliefs themselves on the opinion scale are also most likely to believe that others are extremely polarized.

These findings ring true to me. Polarization does seem to be up. But those who bemoan it most loudly, on both sides of the political spectrum, are often bemoaning only how the other side has become polarized–not their own party or themselves personally.